
devops skill

/.claude/skills/devops

This skill helps you deploy, manage, and secure cloud infrastructure across Cloudflare, Docker, GCP, and Kubernetes with CI/CD and GitOps.

npx playbooks add skill jjuidev/jss --skill devops

Review the files below or copy the command above to add this skill to your agents.

Files (29)
SKILL.md
---
name: devops
description: Deploy to Cloudflare (Workers, R2, D1), Docker, GCP (Cloud Run, GKE), Kubernetes (kubectl, Helm). Use for serverless, containers, CI/CD, GitOps, security audit.
license: MIT
version: 2.0.0
---

# DevOps Skill

Deploy and manage cloud infrastructure across Cloudflare, Docker, Google Cloud, and Kubernetes.

## When to Use

- Deploy serverless apps to Cloudflare Workers/Pages
- Containerize apps with Docker, Docker Compose
- Manage GCP with gcloud CLI (Cloud Run, GKE, Cloud SQL)
- Kubernetes cluster management (kubectl, Helm)
- GitOps workflows (Argo CD, Flux)
- CI/CD pipelines, multi-region deployments
- Security audits, RBAC, network policies

## Platform Selection

| Need | Choose |
|------|--------|
| Sub-50ms latency globally | Cloudflare Workers |
| Large file storage (no egress fees) | Cloudflare R2 |
| SQL database (global reads) | Cloudflare D1 |
| Containerized workloads | Docker + Cloud Run/GKE |
| Enterprise Kubernetes | GKE |
| Managed relational DB | Cloud SQL |
| Static site + API | Cloudflare Pages |
| Container orchestration | Kubernetes |
| Package management for K8s | Helm |

## Quick Start

```bash
# Cloudflare Worker
wrangler init my-worker && cd my-worker && wrangler deploy

# Docker
docker build -t myapp . && docker run -p 3000:3000 myapp

# GCP Cloud Run
gcloud run deploy my-service --image gcr.io/project/image --region us-central1

# Kubernetes
kubectl apply -f manifests/ && kubectl get pods
```

## Reference Navigation

### Cloudflare Platform
- `cloudflare-platform.md` - Edge computing overview
- `cloudflare-workers-basics.md` - Handler types, patterns
- `cloudflare-workers-advanced.md` - Performance, optimization
- `cloudflare-workers-apis.md` - Runtime APIs, bindings
- `cloudflare-r2-storage.md` - Object storage, S3 compatibility
- `cloudflare-d1-kv.md` - D1 SQLite, KV store
- `browser-rendering.md` - Puppeteer automation

### Docker
- `docker-basics.md` - Dockerfile, images, containers
- `docker-compose.md` - Multi-container apps

### Google Cloud
- `gcloud-platform.md` - gcloud CLI, authentication
- `gcloud-services.md` - Compute Engine, GKE, Cloud Run

### Kubernetes
- `kubernetes-basics.md` - Core concepts, architecture, workloads
- `kubernetes-kubectl.md` - Essential commands, debugging workflow
- `kubernetes-helm.md` / `kubernetes-helm-advanced.md` - Helm charts, templates
- `kubernetes-security.md` / `kubernetes-security-advanced.md` - RBAC, secrets
- `kubernetes-workflows.md` / `kubernetes-workflows-advanced.md` - GitOps, CI/CD
- `kubernetes-troubleshooting.md` / `kubernetes-troubleshooting-advanced.md` - Debug

### Scripts
- `scripts/cloudflare-deploy.py` - Automate Worker deployments
- `scripts/docker-optimize.py` - Analyze Dockerfiles
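
As an illustration, a Dockerfile analysis along the lines of what `scripts/docker-optimize.py` might perform could look like the sketch below. The function name and the specific checks are assumptions for illustration, not the actual script's behavior:

```python
# Hypothetical sketch of a Dockerfile linter, illustrating the kind of
# checks a script like docker-optimize.py might perform. The function
# name and rules here are assumptions, not the real script.

def lint_dockerfile(text: str) -> list[str]:
    """Return a list of warnings for common Dockerfile issues."""
    warnings = []
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    # Flag unpinned base images: FROM without an explicit, non-latest tag
    for line in lines:
        if line.upper().startswith("FROM"):
            image = line.split()[1]
            if ":" not in image or image.endswith(":latest"):
                warnings.append(f"unpinned base image: {image}")
    # Flag images that never drop root privileges
    if not any(line.upper().startswith("USER") for line in lines):
        warnings.append("no USER instruction: container runs as root")
    return warnings

example = """\
FROM node:latest
RUN npm ci
CMD ["node", "server.js"]
"""
print(lint_dockerfile(example))
```

Checks like these pair naturally with CI: fail the pipeline when the warning list is non-empty.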

## Best Practices

- **Security:** Non-root containers, RBAC, secrets in dedicated secret stores (not plain env vars), image scanning
- **Performance:** Multi-stage builds, edge caching, resource limits
- **Cost:** R2 for large egress, caching, right-size resources
- **Development:** Docker Compose for local dev, wrangler dev, version-controlled IaC
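
The multi-stage and non-root recommendations above can be combined in a Dockerfile like this sketch (it assumes a Node.js app with an `npm run build` step; adjust the base image and commands to your stack):

```dockerfile
# Build stage: install dependencies and compile with the full toolchain
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: copy only build artifacts and run as the non-root
# `node` user that the official Node images provide
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Separating the build and runtime stages keeps build-only tooling out of the final image, which shrinks both its size and its attack surface.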

## Resources

- Cloudflare: https://developers.cloudflare.com
- Docker: https://docs.docker.com
- GCP: https://cloud.google.com/docs
- Kubernetes: https://kubernetes.io/docs
- Helm: https://helm.sh/docs

Overview

This skill helps you deploy and manage applications across Cloudflare (Workers, R2, D1), Docker, Google Cloud (Cloud Run, GKE), and Kubernetes using kubectl and Helm. I provide practical commands, platform guidance, and automation scripts for serverless, container, CI/CD, GitOps, and security-oriented workflows. The goal is fast, repeatable deployments with a focus on performance, cost, and safety.

How this skill works

I inspect your target platform needs and recommend the appropriate runtime (edge worker, managed container, or Kubernetes). I provide step-by-step commands and patterns for building, deploying, and automating releases using standard CLIs like wrangler, docker, gcloud, kubectl, and Helm. For GitOps and CI/CD I outline workflows compatible with Argo CD/Flux and common pipeline tools, plus checks for RBAC and image/security scanning.
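
The Argo CD side of such a GitOps workflow can be sketched as an Application manifest like the one below (the repo URL, path, and names are placeholders, not values from this skill):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp                # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git  # placeholder repo
    targetRevision: main
    path: k8s/myapp          # directory of manifests or a Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true            # delete resources removed from Git
      selfHeal: true         # revert manual drift in the cluster
```

With `automated` sync enabled, the cluster converges on whatever the Git repository declares, which is what makes the deployments auditable.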

When to use it

  • Deploy low-latency, globally distributed code or static sites to Cloudflare Workers and Pages.
  • Containerize apps locally with Docker or Docker Compose for development and production images.
  • Run managed containers on Cloud Run for simple services or GKE for enterprise orchestration.
  • Manage Kubernetes clusters with kubectl and Helm for scalable apps and templated deployments.
  • Implement GitOps workflows or CI/CD pipelines for automated, auditable deployments.

Best practices

  • Use multi-stage Docker builds and non-root users to reduce image size and attack surface.
  • Store secrets securely (Kubernetes Secrets, sealed secrets, or platform secret stores) and avoid leaking them into environment variables or logs.
  • Right-size resources, set resource limits, and configure readiness/liveness probes in Kubernetes.
  • Prefer edge caching and Cloudflare R2 for large object storage to lower egress costs.
  • Scan container images and enforce RBAC and network policies before production rollout.
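
Several of these practices (resource limits, probes, running as non-root) come together in a Deployment manifest like this sketch (the image, port, and `/healthz` path are placeholders; substitute your own):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0  # placeholder image
          ports:
            - containerPort: 3000
          resources:
            requests: { cpu: 100m, memory: 128Mi }  # scheduling guarantee
            limits: { cpu: 500m, memory: 256Mi }    # hard ceiling
          readinessProbe:
            httpGet: { path: /healthz, port: 3000 }
          livenessProbe:
            httpGet: { path: /healthz, port: 3000 }
            initialDelaySeconds: 10
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
```

Requests drive scheduling while limits cap runtime usage; the probes keep traffic away from pods that are not ready and restart ones that hang.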

Example use cases

  • Deploy a high-performance API to Cloudflare Workers for sub-50ms global latency.
  • Build and push a container image, then deploy to Cloud Run with zero-config autoscaling.
  • Manage microservices on GKE using Helm charts, with Argo CD for GitOps-driven deployments.
  • Run local integration testing with Docker Compose and mirror production with Kubernetes manifests.
  • Automate Worker and R2 deployments with Python scripts for repeatable CI/CD steps.

FAQ

Which platform should I pick for global low-latency endpoints?

Use Cloudflare Workers for compute at the edge and Cloudflare Pages for static + API combos; choose Workers for sub-50ms global responses.

When should I use Cloud Run vs GKE?

Choose Cloud Run for simple, serverless containers with minimal ops; choose GKE for complex, highly orchestrated enterprise workloads.

How do I keep costs under control?

Use edge caching, R2 for large object storage, right-size resources, and enable autoscaling and quota controls.