
This skill guides you through Juicebox incident response, enabling rapid diagnosis, containment, and recovery following standardized procedures.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill juicebox-incident-runbook

Copy the command above to add this skill to your agents.

---
name: juicebox-incident-runbook
description: |
  Execute Juicebox incident response procedures.
  Use when responding to production incidents, troubleshooting outages,
  or following incident management protocols.
  Trigger with phrases like "juicebox incident", "juicebox outage",
  "juicebox down", "juicebox emergency".
allowed-tools: Read, Write, Edit, Bash(kubectl:*), Bash(curl:*)
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# Juicebox Incident Runbook

## Overview
Standardized incident response procedures for Juicebox integration issues.

## Incident Severity Levels

| Severity | Description | Response Time | Examples |
|----------|-------------|---------------|----------|
| P1 | Critical | < 15 min | Complete outage, data loss |
| P2 | High | < 1 hour | Major feature broken, degraded performance |
| P3 | Medium | < 4 hours | Minor feature issue, workaround exists |
| P4 | Low | < 24 hours | Cosmetic, non-blocking |

## Quick Diagnostics

### Step 1: Immediate Assessment
```bash
#!/bin/bash
# quick-diag.sh - Run immediately when incident detected

echo "=== Juicebox Quick Diagnostics ==="
echo "Timestamp: $(date -u +%Y-%m-%dT%H:%M:%SZ)"

# Check Juicebox status page
echo ""
echo "=== Juicebox Status ==="
curl -s https://status.juicebox.ai/api/status | jq '.status'

# Check our API health
echo ""
echo "=== Our API Health ==="
curl -s http://localhost:8080/health/ready | jq '.'

# Check recent error logs
echo ""
echo "=== Recent Errors (last 5 min) ==="
kubectl logs -l app=juicebox-integration --since=5m | grep -i error | tail -20

# Check metrics
echo ""
echo "=== Error Rate ==="
curl -s -G http://localhost:9090/api/v1/query \
  --data-urlencode 'query=rate(juicebox_requests_total{status="error"}[5m])' \
  | jq '.data.result[0].value[1]'
```

### Step 2: Identify Root Cause
```markdown
## Incident Triage Decision Tree

1. Is Juicebox status page showing issues?
   - YES → External outage, skip to "External Outage Response"
   - NO → Continue

2. Are we getting authentication errors (401)?
   - YES → Check API key validity, skip to "Auth Issues"
   - NO → Continue

3. Are we getting rate limited (429)?
   - YES → Skip to "Rate Limit Response"
   - NO → Continue

4. Are requests timing out?
   - YES → Skip to "Timeout Response"
   - NO → Continue

5. Are we getting unexpected errors?
   - YES → Skip to "Application Error Response"
   - NO → Gather more data
```
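The decision tree above can be sketched as a small shell helper that maps the HTTP status code from a probe request to the runbook section to follow. The helper name and the probe endpoint are illustrative, not part of the Juicebox tooling:

```shell
#!/bin/bash
# triage.sh - sketch of the triage decision tree as a classifier.
# classify_response takes an HTTP status code (as printed by curl's
# %{http_code}) and prints which response procedure to jump to.
classify_response() {
  case "$1" in
    000)     echo "Timeout Response" ;;            # curl could not connect
    401|403) echo "Auth Issues" ;;
    429)     echo "Rate Limit Response" ;;
    5*)      echo "Application Error Response" ;;
    2*)      echo "No action - API healthy" ;;
    *)       echo "Gather more data" ;;
  esac
}

# Example probe (network call, shown for context only):
# code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
#   https://api.juicebox.ai/v1/health)
# classify_response "$code"
```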

## Response Procedures

### External Outage Response
````markdown
## When Juicebox is Down

1. **Confirm Outage**
   - Check https://status.juicebox.ai
   - Verify with curl test to API

2. **Enable Fallback Mode**
   ```bash
   kubectl set env deployment/juicebox-integration JUICEBOX_FALLBACK=true
   ```

3. **Notify Stakeholders**
   - Post to #incidents channel
   - Update status page if customer-facing

4. **Monitor Recovery**
   - Set up alert for Juicebox status change
   - Prepare to disable fallback mode

5. **Post-Incident**
   - Disable fallback when Juicebox recovers
   - Document timeline and impact
````
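Step 4 above (monitor recovery) can be sketched as a polling loop against the status endpoint from the diagnostics script. The `"ok"`/`"operational"` status strings are assumptions; confirm them against the real payload returned by status.juicebox.ai:

```shell
#!/bin/bash
# watch-recovery.sh - poll the Juicebox status page and report when it
# looks safe to disable fallback mode.

is_recovered() {
  # Succeeds when the reported status string indicates full recovery.
  # ("ok"/"operational" are assumed values; verify against the API.)
  [ "$1" = "ok" ] || [ "$1" = "operational" ]
}

watch_recovery() {
  while true; do
    status=$(curl -s https://status.juicebox.ai/api/status | jq -r '.status')
    if is_recovered "$status"; then
      echo "Juicebox recovered; disable fallback with:"
      echo "  kubectl set env deployment/juicebox-integration JUICEBOX_FALLBACK=false"
      return 0
    fi
    echo "Status still '$status'; rechecking in 60s"
    sleep 60
  done
}
```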

### Auth Issues Response
````markdown
## When Authentication Fails

1. **Verify API Key**
   ```bash
   # Mask key for logging
   echo "Key prefix: ${JUICEBOX_API_KEY:0:10}..."

   # Test auth
   curl -H "Authorization: Bearer $JUICEBOX_API_KEY" \
     https://api.juicebox.ai/v1/auth/me
   ```

2. **Check Key Status in Dashboard**
   - Log into https://app.juicebox.ai
   - Verify key is active and not revoked

3. **Rotate Key if Compromised**
   - Generate new key in dashboard
   - Update secret manager
   - Restart pods
   ```bash
   kubectl rollout restart deployment/juicebox-integration
   ```

4. **Verify Recovery**
   - Check health endpoint
   - Monitor error rate
```​`
````
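The key-masking one-liner from step 1 can be wrapped in a reusable helper so secrets never reach logs in full during triage. The helper name and the `juicebox-api-key` secret name in the comments are illustrative assumptions:

```shell
#!/bin/bash
# mask_key - print only a short prefix of a secret so it can be logged
# safely during auth triage (helper name is illustrative).
mask_key() {
  local key="$1"
  echo "${key:0:10}..."
}

# Usage during rotation (mirrors step 3 above; secret name assumed):
#   mask_key "$JUICEBOX_API_KEY"   # log the prefix only, never the full key
#   kubectl create secret generic juicebox-api-key \
#     --from-literal=key="$NEW_KEY" --dry-run=client -o yaml | kubectl apply -f -
#   kubectl rollout restart deployment/juicebox-integration
```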

### Rate Limit Response
````markdown
## When Rate Limited

1. **Check Current Usage**
   ```bash
   curl -H "Authorization: Bearer $JUICEBOX_API_KEY" \
     https://api.juicebox.ai/v1/usage
   ```

2. **Immediate Mitigation**
   - Enable aggressive caching
   - Reduce request rate
   ```bash
   kubectl set env deployment/juicebox-integration JUICEBOX_RATE_LIMIT=10
   ```

3. **If Quota Exhausted**
   - Contact Juicebox support for temporary increase
   - Implement request queuing

4. **Long-term Fix**
   - Review usage patterns
   - Implement better caching
   - Consider plan upgrade
````
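When backing off after 429s, retries should slow down exponentially rather than hammer the API. A minimal sketch of the delay computation (a `Retry-After` header from the 429 response, when present, should take precedence over this computed value):

```shell
#!/bin/bash
# backoff_seconds - compute the delay before retry attempt N (1-based):
# doubles from a 1s base and caps at 60s.
backoff_seconds() {
  local attempt="$1"
  local delay=$(( 1 << (attempt - 1) ))   # 1, 2, 4, 8, ...
  [ "$delay" -gt 60 ] && delay=60         # cap at 60s
  echo "$delay"
}

# Usage in a retry loop:
#   for attempt in 1 2 3 4 5; do
#     try_request && break
#     sleep "$(backoff_seconds "$attempt")"
#   done
```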

### Timeout Response
````markdown
## When Requests Timeout

1. **Check Network**
   ```bash
   # DNS resolution
   nslookup api.juicebox.ai

   # Connectivity
   curl -v --connect-timeout 5 https://api.juicebox.ai/v1/health
   ```

2. **Check Load**
   - Review query complexity
   - Check for unusually large requests

3. **Increase Timeout**
   ```bash
   kubectl set env deployment/juicebox-integration JUICEBOX_TIMEOUT=60000
   ```

4. **Implement Circuit Breaker**
   - Enable circuit breaker if repeated timeouts
   - Serve cached/fallback data
````
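The circuit breaker from step 4 can be sketched as a consecutive-failure counter. The threshold and function names are illustrative, not an existing Juicebox setting:

```shell
#!/bin/bash
# Minimal in-shell circuit breaker sketch: trip after 3 consecutive
# failures, and skip upstream calls (serving cached/fallback data)
# while the breaker is open.
FAILURES=0
THRESHOLD=3

record_result() {
  # Call with "ok" or "fail" after each upstream request.
  if [ "$1" = "fail" ]; then
    FAILURES=$((FAILURES + 1))
  else
    FAILURES=0    # any success resets the counter
  fi
}

circuit_open() {
  # Succeeds when the breaker is tripped and requests should be skipped.
  [ "$FAILURES" -ge "$THRESHOLD" ]
}
```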

## Incident Communication Template

```markdown
## Incident Report Template

**Incident ID:** INC-YYYY-MM-DD-XXX
**Status:** Investigating | Identified | Monitoring | Resolved
**Severity:** P1 | P2 | P3 | P4
**Start Time:** YYYY-MM-DD HH:MM UTC
**End Time:** (when resolved)

### Summary
[Brief description of the incident]

### Impact
- Users affected: [number/percentage]
- Features affected: [list]
- Duration: [time]

### Timeline
- HH:MM - Incident detected
- HH:MM - Investigation started
- HH:MM - Root cause identified
- HH:MM - Mitigation applied
- HH:MM - Incident resolved

### Root Cause
[Description of what caused the incident]

### Resolution
[What was done to fix it]

### Action Items
- [ ] Action 1 (Owner, Due Date)
- [ ] Action 2 (Owner, Due Date)
```
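Incident IDs in the `INC-YYYY-MM-DD-XXX` format used by the template can be generated consistently with a small helper; the function name is illustrative, and the per-day sequence number is supplied by the caller:

```shell
#!/bin/bash
# new_incident_id - print an ID in the INC-YYYY-MM-DD-XXX format used in
# the report template; the caller supplies the per-day sequence number.
new_incident_id() {
  printf 'INC-%s-%03d\n' "$(date -u +%Y-%m-%d)" "$1"
}

# Usage:
#   new_incident_id 7    # e.g. INC-2025-01-15-007 (date varies)
```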

## On-Call Checklist

```markdown
## On-Call Handoff Checklist

### Before Shift
- [ ] Access to all monitoring dashboards
- [ ] VPN/access to production systems
- [ ] Runbook bookmarked
- [ ] Escalation contacts available

### During Shift
- [ ] Check dashboards every 30 min
- [ ] Respond to alerts within SLA
- [ ] Document all incidents
- [ ] Escalate P1/P2 immediately

### End of Shift
- [ ] Handoff open incidents
- [ ] Update incident log
- [ ] Brief incoming on-call
```

## Output
- Diagnostic scripts
- Response procedures
- Communication templates
- On-call checklists

## Resources
- [Juicebox Status](https://status.juicebox.ai)
- [Support Portal](https://juicebox.ai/support)

## Next Steps
After incident, see `juicebox-data-handling` for data management.

Overview

This skill executes the Juicebox incident response runbook to handle integration outages, authentication failures, rate limits, and timeouts. It provides rapid diagnostics, decision trees for triage, and step-by-step remediation actions. Use it to standardize incident handling, communications, and on-call procedures for Juicebox-related production issues.

How this skill works

The skill runs quick diagnostics (status checks, health endpoints, recent error logs, and metrics) and walks responders through a decision tree to identify root causes. For each class of problem—external outage, auth issues, rate limiting, timeouts, or application errors—it gives concrete remediation steps, commands to apply, and monitoring checks. It also supplies incident communication templates and an on-call checklist to ensure consistent reporting and handoffs.

When to use it

  • When Juicebox integration causes a production outage or degraded user experience
  • If you observe increased error rates or timeouts to Juicebox endpoints
  • When authentication failures or API key issues appear in logs
  • If requests are being rate limited (429) or quotas are exhausted
  • During on-call shifts to guide triage and escalation procedures

Best practices

  • Run the quick diagnostics script immediately to capture current state and timestamps
  • Follow the triage decision tree to quickly classify incidents and apply the appropriate response path
  • Mask secrets when logging; rotate compromised API keys and update secret manager before restarting pods
  • Enable fallback mode for external outages and monitor status changes before reverting
  • Document timeline, impact, root cause, and action items in the incident report template for post-incident review

Example use cases

  • P1 outage: Juicebox API unreachable for all users — enable fallback mode, notify stakeholders, and monitor recovery
  • Auth regression: Integration begins returning 401 — validate API key, rotate if revoked, restart deployment
  • Rate limit spike: Frequent 429 responses — tighten caching, reduce request rate, and contact Juicebox support if quota exhausted
  • Timeouts under load: Intermittent request timeouts — check DNS/connectivity, increase client timeout, and enable circuit breaker

FAQ

What initial checks should I run immediately?

Run the quick diagnostics: check status.juicebox.ai, the local readiness endpoint, recent error logs, and the error-rate metric query to capture evidence and gauge severity.

How do I handle a confirmed external outage?

Enable the deployment fallback mode, notify incident channels and stakeholders, monitor the Juicebox status page, and revert fallback when the vendor confirms recovery.