
secure-deployment skill


This skill helps you deploy AI/ML models securely by enforcing defense-in-depth, a zero-trust architecture, and rigorous protections from pre-deployment checks through runtime.

npx playbooks add skill pluginagentmarketplace/custom-plugin-ai-red-teaming --skill secure-deployment

---
name: secure-deployment
version: "2.0.0"
description: Security best practices for deploying AI/ML models to production environments
sasmp_version: "1.3.0"
bonded_agent: 06-api-security-tester
bond_type: SECONDARY_BOND
# Schema Definitions
input_schema:
  type: object
  required: [deployment_stage]
  properties:
    deployment_stage:
      type: string
      enum: [pre_deployment, deployment, runtime, all]
    environment:
      type: string
      enum: [development, staging, production]
output_schema:
  type: object
  properties:
    security_score:
      type: number
    checks_passed:
      type: integer
    checks_failed:
      type: integer
    recommendations:
      type: array
# Framework Mappings
owasp_llm_2025: [LLM03, LLM06]
nist_ai_rmf: [Govern, Manage]
---

# Secure AI Deployment

Deploy **AI/ML models securely** with defense-in-depth strategies and zero-trust architecture.

## Quick Reference

```yaml
Skill:       secure-deployment
Agent:       06-api-security-tester
OWASP:       LLM03 (Supply Chain), LLM06 (Excessive Agency)
NIST:        Govern, Manage
Use Case:    Secure production deployment
```

## Deployment Pipeline

```
Model Training → [Security Scan] → [Signing] → [Encrypted Storage]
                                                      ↓
[Canary Deploy] ← [Staged Rollout] ← [Integrity Check] ← [Pull]
       ↓
[Production] → [Continuous Monitoring]
```

## Security Stages

### 1. Pre-Deployment Checks

```yaml
Security Scans:
  - model_vulnerability_scan
  - dependency_audit
  - bias_evaluation
  - adversarial_robustness_test
  - pii_leak_detection
  - license_compliance
  - secrets_detection
```

```python
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    """Outcome of a single pre-deployment gate."""
    check: str
    status: str                  # "PASS" or "FAIL"
    findings: list = field(default_factory=list)

class PreDeploymentChecker:
    def __init__(self, dependency_scanner, secret_scanner):
        # Scanner backends are injected (e.g. wrappers around
        # pip-audit and gitleaks); the remaining checks are defined
        # elsewhere in the skill.
        self.dependency_scanner = dependency_scanner
        self.secret_scanner = secret_scanner

    def run_all_checks(self, model_path):
        """Run every gate; deploy only if all results are PASS."""
        return [
            self.audit_dependencies(model_path),   # vulnerable packages
            self.scan_for_secrets(model_path),     # hardcoded credentials
            self.detect_pii_leakage(model_path),   # training-data leakage
            self.test_robustness(model_path),      # adversarial robustness
            self.evaluate_bias(model_path),        # fairness evaluation
        ]

    def audit_dependencies(self, path):
        """Fail on any CRITICAL-severity dependency vulnerability."""
        vulns = self.dependency_scanner.scan(path)
        critical = [v for v in vulns if v.severity == "CRITICAL"]
        if critical:
            return CheckResult("dependencies", "FAIL", critical)
        return CheckResult("dependencies", "PASS")

    def scan_for_secrets(self, path):
        """Fail if any hardcoded secret is detected."""
        secrets = self.secret_scanner.scan(path)
        if secrets:
            return CheckResult("secrets", "FAIL", secrets)
        return CheckResult("secrets", "PASS")
```

### 2. Deployment Configuration

```yaml
Container Security:
  base_image: distroless/python3
  user: nonroot (UID 65532)
  filesystem: read-only
  capabilities: drop ALL
  seccomp: runtime/default

Network Security:
  ingress: API gateway only
  egress: allowlist only
  mtls: required
  network_policy: strict

Secrets Management:
  provider: HashiCorp Vault
  injection: sidecar
  rotation: 24 hours
  never_in_env: true

Model Storage:
  encryption: AES-256-GCM
  signing: RSA-4096
  integrity: SHA-256 hash
  access: RBAC enforced
```

```python
# Kubernetes deployment security
SECURE_DEPLOYMENT = """
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        fsGroup: 65532
        seccompProfile:
          type: RuntimeDefault

      containers:
      - name: model-server
        image: distroless/python3:nonroot
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
        resources:
          limits:
            cpu: "4"
            memory: "16Gi"
            nvidia.com/gpu: "1"
          requests:
            cpu: "2"
            memory: "8Gi"
        volumeMounts:
        - name: model
          mountPath: /model
          readOnly: true
        - name: tmp
          mountPath: /tmp
"""
```
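A deployment gate can assert these hardening settings on the parsed manifest before applying it. A minimal sketch, operating on the pod spec as a plain dict (the `audit_security_context` helper and its finding strings are illustrative assumptions):

```python
def audit_security_context(pod_spec):
    """Return a list of findings for a parsed pod spec (dict),
    checking the hardening settings this skill expects."""
    findings = []
    pod_sc = pod_spec.get("securityContext", {})
    if not pod_sc.get("runAsNonRoot"):
        findings.append("pod does not enforce runAsNonRoot")
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        # Kubernetes defaults allowPrivilegeEscalation to true
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"{c['name']}: privilege escalation not disabled")
        if not sc.get("readOnlyRootFilesystem"):
            findings.append(f"{c['name']}: root filesystem is writable")
        if "ALL" not in sc.get("capabilities", {}).get("drop", []):
            findings.append(f"{c['name']}: capabilities not dropped")
        if not c.get("resources", {}).get("limits"):
            findings.append(f"{c['name']}: no resource limits")
    return findings
```

An empty findings list means the spec matches the secure manifest above; any finding should fail the deploy step.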

### 3. Runtime Protection

```yaml
Isolation:
  runtime: gvisor
  network: namespace isolated
  process: pid namespace

Monitoring:
  logging: structured JSON
  metrics: Prometheus
  tracing: OpenTelemetry
  alerts: PagerDuty

Resource Protection:
  cpu_limit: enforced
  memory_limit: enforced
  gpu_memory: enforced
  timeout: 30 seconds
```

```python
class RuntimeProtection:
    """Wraps inference with rate limiting, a timeout, and a memory cap.

    `RateLimiter`, `timeout`, `memory_limit`, and `RateLimitError` are
    assumed helpers: e.g. a token bucket, a SIGALRM-based context
    manager, and a resource.setrlimit-based context manager.
    """

    def __init__(self):
        self.timeout = 30               # seconds per inference call
        self.max_memory = 16 * 1024**3  # 16 GiB cap
        self.rate_limiter = RateLimiter()

    def protected_inference(self, model, input_data, user_id):
        # Per-user rate limiting
        if not self.rate_limiter.allow(user_id):
            raise RateLimitError(user_id)

        # Wall-clock timeout around the model call
        with timeout(self.timeout):
            # Memory cap for the duration of the call
            with memory_limit(self.max_memory):
                result = model.infer(input_data)

        # Audit log for every request
        self.log_inference(user_id, input_data, result)
        return result
```
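The `RateLimiter` above is left abstract. One common realization, sketched here as an assumption rather than the skill's actual implementation, is a per-user token bucket:

```python
import time

class RateLimiter:
    """Token-bucket rate limiter: each user gets `capacity` requests,
    refilled at `refill_rate` tokens per second."""

    def __init__(self, capacity=10, refill_rate=1.0):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self._buckets = {}  # user_id -> (tokens, last_refill_timestamp)

    def allow(self, user_id):
        now = time.monotonic()
        tokens, last = self._buckets.get(user_id, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity
        tokens = min(self.capacity, tokens + (now - last) * self.refill_rate)
        if tokens >= 1:
            self._buckets[user_id] = (tokens - 1, now)
            return True
        self._buckets[user_id] = (tokens, now)
        return False
```

Token buckets allow short bursts up to `capacity` while enforcing the average rate, which suits spiky inference traffic better than a fixed window.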

### 4. Staged Rollout

```yaml
Rollout Strategy:
  canary:
    initial_percentage: 5%
    increment: 10%
    interval: 1 hour
    success_criteria:
      - error_rate < 0.1%
      - latency_p99 < 5s
      - no_security_alerts

  rollback:
    automatic: true
    triggers:
      - error_rate > 1%
      - security_alert
      - latency_p99 > 10s
```
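The promotion/rollback decision above can be expressed as a pure function over the canary's metrics window. The function name and metric keys are illustrative assumptions; the thresholds mirror the YAML criteria:

```python
def evaluate_canary(metrics):
    """Return PROMOTE / HOLD / ROLLBACK for one canary metrics window."""
    # Rollback triggers take precedence over promotion criteria
    if (metrics["error_rate"] > 0.01
            or metrics["security_alerts"] > 0
            or metrics["latency_p99_s"] > 10):
        return "ROLLBACK"
    # Promote only when every success criterion holds
    if (metrics["error_rate"] < 0.001
            and metrics["latency_p99_s"] < 5
            and metrics["security_alerts"] == 0):
        return "PROMOTE"
    # Otherwise stay at the current traffic percentage
    return "HOLD"
```

A rollout controller would call this once per interval and either bump the canary percentage, hold, or revert to the previous model version.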

## Security Checklist

```yaml
Pre-Deployment:
  - [ ] Dependencies scanned and patched
  - [ ] Secrets removed from codebase
  - [ ] PII leak testing passed
  - [ ] Adversarial robustness validated
  - [ ] Model signed and verified
  - [ ] Access controls configured

Deployment:
  - [ ] Non-root container
  - [ ] Read-only filesystem
  - [ ] Resource limits set
  - [ ] Network policies applied
  - [ ] Secrets via vault
  - [ ] TLS/mTLS enabled

Runtime:
  - [ ] Monitoring enabled
  - [ ] Alerting configured
  - [ ] Logging comprehensive
  - [ ] Rate limiting active
  - [ ] Rollback tested
```

## CI/CD Security Gates

```yaml
# .github/workflows/secure-deploy.yml
name: Secure Deployment

on: push

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Dependency Audit
        run: pip-audit --strict

      - name: Secret Scan
        run: gitleaks detect

      - name: Container Scan
        run: trivy image $IMAGE

      - name: SBOM Generation
        run: syft $IMAGE -o spdx-json

  deploy:
    needs: security-scan
    runs-on: ubuntu-latest
    steps:
      - name: Sign Image
        run: cosign sign $IMAGE

      - name: Verify Signature
        run: cosign verify $IMAGE

      - name: Deploy Canary
        run: kubectl apply -f canary.yaml
```

## Severity Classification

```yaml
CRITICAL:
  - Secrets in codebase
  - Critical vulnerabilities
  - No authentication

HIGH:
  - Root container
  - Missing encryption
  - No rate limiting

MEDIUM:
  - Missing resource limits
  - Incomplete logging
  - Outdated dependencies

LOW:
  - Non-optimal configs
  - Missing SBOM
```
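These severity tiers can feed the skill's `output_schema` (security_score, checks_failed, recommendations). A minimal sketch; the penalty weights and the hard block on CRITICAL/HIGH findings are assumptions for illustration, not a mandated scoring scheme:

```python
# Illustrative penalty weights per finding severity (assumed values)
SEVERITY_WEIGHTS = {"CRITICAL": 40, "HIGH": 20, "MEDIUM": 10, "LOW": 5}

def score_findings(findings):
    """Compute a 0-100 security score and a deploy gate from a list of
    (description, severity) findings."""
    penalty = sum(SEVERITY_WEIGHTS.get(sev, 0) for _, sev in findings)
    score = max(0, 100 - penalty)
    # Any CRITICAL or HIGH finding blocks deployment outright
    blocked = any(sev in ("CRITICAL", "HIGH") for _, sev in findings)
    return {
        "security_score": score,
        "checks_failed": len(findings),
        "deploy_blocked": blocked,
    }
```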

## Troubleshooting

```yaml
Issue: Deployment failing security scan
Solution: Update dependencies, remove secrets, fix configs

Issue: Container won't start (read-only FS)
Solution: Use tmpfs for temp files, volume for model

Issue: High latency after security layers
Solution: Optimize validation, use caching, async logging
```

## Integration Points

| Component | Purpose |
|-----------|---------|
| Agent 06 | Security testing |
| Agent 08 | CI/CD automation |
| /test api | Pre-deploy testing |
| ArgoCD | GitOps deployment |

---

**Deploy AI models securely with defense-in-depth practices.**

Overview

This skill provides actionable security best practices for deploying AI/ML models to production using a defense-in-depth and zero-trust approach. It codifies checks for pre-deployment scanning, secure container and network configuration, runtime protections, and staged rollout patterns. The goal is to reduce supply-chain, data leakage, and runtime risks while enabling safe, auditable rollouts.

How this skill works

The skill inspects model artifacts, dependencies, and deployment manifests with automated scanners for vulnerabilities, secrets, PII leakage, bias, and adversarial robustness. It enforces signing and encryption of model binaries, builds hardened container configurations, and wires runtime controls like timeouts, rate limits, and resource caps. Deployment is staged via canaries with automatic rollback triggers and continuous monitoring for metrics, logs, and alerts.

When to use it

  • Before promoting a trained model from staging to production to verify security posture.
  • When integrating models into CI/CD pipelines to enforce gate checks and signing.
  • Deploying models that handle sensitive data or operate in regulated environments.
  • Rolling out large-model updates where staged canary deployments reduce blast radius.
  • When you need to demonstrate auditability, integrity, and RBAC for model artifacts.

Best practices

  • Run dependency audits, secret scans, PII leakage tests, bias checks, and adversarial robustness tests as pre-deploy gates.
  • Sign models and images (e.g., RSA-4096 + cosign) and store binaries encrypted with strong algorithms (AES-256-GCM).
  • Use non-root, distroless base images, read-only root filesystems, drop Linux capabilities, and apply seccomp profiles.
  • Limit network egress to an allowlist, require mTLS for ingress, and enforce strict network policies per namespace.
  • Enforce runtime controls: timeouts, memory/GPU limits, rate limiting, and use sandboxing (gVisor) or namespaces.
  • Implement canary rollouts with clear success/failure criteria and automatic rollback on security alerts or performance regressions.

Example use cases

  • CI/CD pipeline that blocks deployment until dependency and secret scans pass and the image is signed.
  • Production model server deployed on Kubernetes with non-root distroless image, read-only filesystem, and Vault-injected secrets.
  • Canary rollout for a new model version with automated rollback if error rate or security alerts exceed thresholds.
  • Runtime protection for hosted inference: rate limiting per user, 30s timeout, memory caps, and structured JSON logging to SIEM.
  • Generating SBOMs and running container image scans as part of supply-chain risk assessments.

FAQ

What are the minimum pre-deployment checks to enforce?

At minimum: dependency audit, secret detection, PII leakage tests, model signing, and RBAC for model storage.

How should secrets be provided to model servers?

Store secrets in a secrets manager (e.g., Vault) and inject at runtime via sidecar or CSI driver; never bake secrets into images or env vars.

When should I use canary vs full rollout?

Start with a canary for any model change that affects behavior or handles sensitive data; proceed to wider rollout only after passing success criteria over a defined interval.