ci-cd-implementation-rule skill

/.claude/skills/ci-cd-implementation-rule

This skill helps implement CI/CD using GitHub Actions or GitLab CI by applying coding standards and best practices for reliable pipelines.

npx playbooks add skill oimiragieo/agent-studio --skill ci-cd-implementation-rule

Review the files below or copy the command above to add this skill to your agents.

Files (12)
SKILL.md
1.4 KB
---
name: ci-cd-implementation-rule
description: Uses GitHub Actions or GitLab CI for CI/CD implementation.
version: 1.0.0
model: sonnet
invoked_by: both
user_invocable: true
tools: [Read, Write, Edit]
globs: '*'
best_practices:
  - Follow the guidelines consistently
  - Apply rules during code review
  - Use as reference when writing new code
error_handling: graceful
streaming: supported
---

# CI/CD Implementation Rule Skill

<identity>
You are a coding standards expert specializing in CI/CD implementation rules.
You help developers write better code by applying established guidelines and best practices.
</identity>

<capabilities>
- Review code for guideline compliance
- Suggest improvements based on best practices
- Explain why certain patterns are preferred
- Help refactor code to meet standards
</capabilities>

<instructions>
When reviewing or writing code, apply these guidelines:

- CI/CD implementation with GitHub Actions or GitLab CI.
</instructions>

<examples>
Example usage:
```
User: "Review this code for CI/CD implementation rule compliance"
Agent: [Analyzes code against guidelines and provides specific feedback]
```
</examples>

## Memory Protocol (MANDATORY)

**Before starting:**

```bash
cat .claude/context/memory/learnings.md
```

**After completing:** Record any new patterns or exceptions discovered.

> ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.

Overview

This skill implements and enforces CI/CD guidelines using GitHub Actions or GitLab CI for JavaScript projects. I review pipeline configurations, suggest improvements, and provide concrete changes to align CI/CD with established best practices. The goal is reliable, fast, and secure automation for build, test, and deploy stages.

How this skill works

I inspect workflow or pipeline files (GitHub Actions YAML or .gitlab-ci.yml) and the surrounding project structure to identify gaps and anti-patterns. I check trigger rules, caching, parallelization, artifact handling, secrets management, and integration with tests and linters. I then produce actionable recommendations and small refactors or YAML snippets to fix issues and optimize performance.
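The staged, cached structure described above might look like the following minimal GitHub Actions workflow. This is a hedged sketch, not a prescription: the workflow name, Node version, and `npm run lint` script are illustrative assumptions about the project.

```yaml
# Hypothetical .github/workflows/ci.yml — job names, Node version,
# and npm scripts are assumptions for illustration.
name: CI
on:
  pull_request:
  push:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm          # built-in cache keyed on package-lock.json
      - run: npm ci
      - run: npm run lint

  test:
    needs: lint               # explicit dependency: fail fast on lint errors
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm test
```

Splitting `lint` and `test` into separate jobs keeps each run small and lets PR status checks pinpoint which stage failed.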

When to use it

  • Setting up a new CI/CD pipeline for a JavaScript project
  • Auditing existing GitHub Actions or GitLab CI configurations
  • Improving build speed, reliability, or security of pipelines
  • Standardizing pipelines across multiple repositories
  • Preparing CI for release, versioning, or deployment workflows

Best practices

  • Use small, focused jobs with explicit dependencies to improve parallelism and readability
  • Cache dependencies and build outputs safely to reduce run time without risking stale artifacts
  • Run linters and unit tests early; run slower integration or e2e tests in later stages or on tags
  • Manage secrets via the platform's secure variables; avoid storing credentials in repo files
  • Fail fast and provide clear, actionable logs and status checks for PRs
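For GitLab CI, the same practices (small stages, lockfile-keyed caching, fail-fast ordering) can be sketched like this; the image tag and cache path are illustrative assumptions, not requirements.

```yaml
# Hypothetical .gitlab-ci.yml sketch — image, cache path, and scripts
# are assumptions for illustration.
stages: [lint, test]

default:
  image: node:20
  cache:
    key:
      files: [package-lock.json]   # cache invalidates when the lockfile changes
    paths: [.npm/]
  before_script:
    - npm ci --cache .npm --prefer-offline

lint:
  stage: lint
  script: npm run lint

test:
  stage: test                      # runs only after the lint stage passes
  script: npm test
```

Secrets would live in the project's CI/CD variables (Settings → CI/CD → Variables) rather than in this file.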

Example use cases

  • Convert a monolithic job into staged jobs: install, lint, test, build, deploy
  • Add path-based workflow triggers to avoid unnecessary CI runs on docs-only changes
  • Introduce dependency caching for npm/yarn and add node-version matrix to cover targets
  • Create protected branch rules combined with required status checks from CI
  • Replace hard-coded secrets with platform-managed secret variables and scoped deploy keys
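Two of these use cases, path-based triggers and a Node version matrix, can be combined in one hedged GitHub Actions fragment; the ignored paths and version list are assumptions to adapt per repository.

```yaml
# Hypothetical fragment — ignored paths and Node versions are
# illustrative assumptions.
on:
  push:
    paths-ignore:                  # skip CI on docs-only changes
      - 'docs/**'
      - '**/*.md'

jobs:
  test:
    strategy:
      matrix:
        node: [18, 20, 22]         # one job per target Node version
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
          cache: npm
      - run: npm ci
      - run: npm test
```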

FAQ

Do you prefer GitHub Actions or GitLab CI?

I recommend the platform native to your repo host; both can implement the same best practices. Choice depends on integrations, runner availability, and required features.

How do you handle long-running tests?

I suggest isolating long tests into separate jobs, running them on demand or only on main branches, and adding test tagging so shorter pipelines run for most PRs.
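On GitHub Actions, isolating long tests to the main branch or an on-demand trigger can be sketched with a conditional job; the `test:e2e` script name is an assumption.

```yaml
# Hypothetical job — the e2e script name is an assumption.
e2e:
  needs: test
  # Run only on pushes to main, or when triggered manually.
  if: github.ref == 'refs/heads/main' || github.event_name == 'workflow_dispatch'
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
        cache: npm
    - run: npm ci
    - run: npm run test:e2e
```

Most PRs then get the fast pipeline, while the full suite still gates the main branch.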