
risk-based-testing skill

/v3/assets/skills/risk-based-testing

This skill helps teams prioritize testing by assessing risk, allocating effort to critical areas, and adapting strategies as new information arrives.

npx playbooks add skill proffesor-for-testing/agentic-qe --skill risk-based-testing

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
6.2 KB
---
name: risk-based-testing
description: "Focus testing effort on highest-risk areas using risk assessment and prioritization. Use when planning test strategy, allocating testing resources, or making coverage decisions."
category: testing-methodologies
priority: high
tokenEstimate: 1000
agents: [qe-regression-risk-analyzer, qe-test-generator, qe-production-intelligence, qe-quality-gate]
implementation_status: optimized
optimization_version: 1.0
last_optimized: 2025-12-02
dependencies: []
quick_reference_card: true
tags: [risk, prioritization, test-planning, coverage, impact-analysis]
---

# Risk-Based Testing

<default_to_action>
When planning tests or allocating testing resources:
1. IDENTIFY risks: What can go wrong? What's the impact? What's the likelihood?
2. CALCULATE risk: Risk = Probability × Impact (use 1-5 scale for each)
3. PRIORITIZE: Critical (20-25) → High (12-19) → Medium (6-11) → Low (1-5)
4. ALLOCATE effort: 60% critical, 25% high, 10% medium, 5% low
5. REASSESS continuously: New info, changes, production incidents

**Quick Risk Assessment:**
- Probability factors: Complexity, change frequency, developer experience, technical debt
- Impact factors: User count, revenue, safety, reputation, regulatory
- Dynamic adjustment: Production bugs increase risk; stable code decreases it

**Critical Success Factors:**
- Test where bugs hurt most, not everywhere equally
- Risk is dynamic - reassess with new information
- Production data informs risk (shift-right feeds shift-left)
</default_to_action>

## Quick Reference Card

### When to Use
- Planning sprint/release test strategy
- Deciding what to automate first
- Allocating limited testing time
- Justifying test coverage decisions

### Risk Calculation
```
Risk Score = Probability (1-5) × Impact (1-5)
```

| Score | Priority | Effort | Action |
|-------|----------|--------|--------|
| 20-25 | Critical | 60% | Comprehensive testing, multiple techniques |
| 12-19 | High | 25% | Thorough testing, automation priority |
| 6-11 | Medium | 10% | Standard testing, basic automation |
| 1-5 | Low | 5% | Smoke test, exploratory only |
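
As a quick illustration of the arithmetic and the band thresholds above, a minimal TypeScript sketch (the function names are illustrative, not part of the skill's API):

```typescript
// Risk = Probability (1-5) × Impact (1-5); thresholds mirror the table above.
function riskScore(probability: number, impact: number): number {
  return probability * impact;
}

function priorityFor(score: number): 'Critical' | 'High' | 'Medium' | 'Low' {
  if (score >= 20) return 'Critical'; // 20-25: comprehensive testing
  if (score >= 12) return 'High';     // 12-19: thorough testing, automation priority
  if (score >= 6)  return 'Medium';   // 6-11: standard testing
  return 'Low';                       // 1-5: smoke / exploratory only
}

// Example: checkout with probability 4 and impact 5 scores 20 → Critical.
console.log(priorityFor(riskScore(4, 5))); // "Critical"
```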

### Probability Factors
| Factor | Low (1) | Medium (3) | High (5) |
|--------|---------|------------|----------|
| Complexity | Simple CRUD | Business logic | Algorithms, integrations |
| Change Rate | Stable 6+ months | Monthly changes | Weekly/daily changes |
| Developer Experience | Senior, domain expert | Mid-level | Junior, new to codebase |
| Technical Debt | Clean code | Some debt | Legacy, no tests |

### Impact Factors
| Factor | Low (1) | Medium (3) | High (5) |
|--------|---------|------------|----------|
| Users Affected | Admin only | Department | All users |
| Revenue | None | Indirect | Direct (checkout) |
| Safety | Convenience | Data loss | Physical harm |
| Reputation | Internal | Industry | Public scandal |

---

## Risk Assessment Workflow

### Step 1: List Features/Components
| Feature | Probability | Impact | Risk | Priority |
|---------|-------------|--------|------|----------|
| Checkout | 4 | 5 | 20 | Critical |
| User Auth | 3 | 5 | 15 | High |
| Admin Panel | 2 | 2 | 4 | Low |
| Search | 3 | 3 | 9 | Medium |
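
A sketch of turning a feature list like this into a prioritized plan with the suggested 60/25/10/5 split; the feature data mirrors the table above, while the hour budget and helper names are illustrative assumptions:

```typescript
// Illustrative sketch: the 40-hour budget and helper names are assumptions,
// not part of the skill's API.
interface FeatureRisk { feature: string; probability: number; impact: number; }

const features: FeatureRisk[] = [
  { feature: 'checkout',    probability: 4, impact: 5 },
  { feature: 'user-auth',   probability: 3, impact: 5 },
  { feature: 'search',      probability: 3, impact: 3 },
  { feature: 'admin-panel', probability: 2, impact: 2 },
];

const bandFor = (score: number) =>
  score >= 20 ? 'critical' : score >= 12 ? 'high' : score >= 6 ? 'medium' : 'low';

// Suggested effort split per band (adjust to context).
const effortShare = { critical: 0.60, high: 0.25, medium: 0.10, low: 0.05 } as const;
const totalHours = 40; // example sprint testing budget

const scored = features
  .map(f => {
    const risk = f.probability * f.impact;
    return { ...f, risk, band: bandFor(risk) };
  })
  .sort((a, b) => b.risk - a.risk);

for (const f of scored) {
  const peers = scored.filter(s => s.band === f.band).length;
  const hours = (effortShare[f.band] * totalHours) / peers;
  console.log(`${f.feature}: risk ${f.risk} (${f.band}) → ~${hours.toFixed(1)}h`);
}
// checkout gets ~24h, user-auth ~10h, search ~4h, admin-panel ~2h.
```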

### Step 2: Apply Test Depth
```typescript
await Task("Risk-Based Test Generation", {
  critical: {
    features: ['checkout', 'payment'],
    depth: 'comprehensive',
    techniques: ['unit', 'integration', 'e2e', 'performance', 'security']
  },
  high: {
    features: ['auth', 'user-profile'],
    depth: 'thorough',
    techniques: ['unit', 'integration', 'e2e']
  },
  medium: {
    features: ['search', 'notifications'],
    depth: 'standard',
    techniques: ['unit', 'integration']
  },
  low: {
    features: ['admin-panel', 'settings'],
    depth: 'smoke',
    techniques: ['smoke-tests']
  }
}, "qe-test-generator");
```

### Step 3: Reassess Dynamically
```typescript
// Production incident increases risk
await Task("Update Risk Score", {
  feature: 'search',
  event: 'production-incident',
  previousRisk: 9,
  newProbability: 5,  // Increased due to incident
  newRisk: 15         // Now HIGH priority
}, "qe-regression-risk-analyzer");
```
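
The arithmetic behind that update, as a standalone sketch: search previously scored 9 (probability 3 × impact 3), and the incident is assumed to push probability to the maximum while impact stays unchanged:

```typescript
// Assumption: a production incident raises probability to 5; impact is unchanged.
interface Assessment { probability: number; impact: number; }

function afterIncident(current: Assessment) {
  const probability = 5;
  const risk = probability * current.impact;
  return { probability, impact: current.impact, risk };
}

console.log(afterIncident({ probability: 3, impact: 3 }));
// { probability: 5, impact: 3, risk: 15 } → now High priority (12-19)
```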

---

## ML-Enhanced Risk Analysis

```typescript
// Agent predicts risk using historical data
const riskAnalysis = await Task("ML Risk Analysis", {
  codeChanges: changedFiles,
  historicalBugs: bugDatabase,
  prediction: {
    model: 'gradient-boosting',
    factors: ['complexity', 'change-frequency', 'author-experience', 'file-age']
  }
}, "qe-regression-risk-analyzer");

// Output: 95% accuracy risk prediction per file
```

---

## Agent Coordination Hints

### Memory Namespace
```
aqe/risk-based/
├── risk-scores/*        - Current risk assessments
├── historical-bugs/*    - Bug patterns by area
├── production-data/*    - Incident data for risk
└── coverage-map/*       - Test depth by risk level
```
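
A hypothetical sketch of an agent persisting and reading scores under this namespace; the `MemoryStore` interface and its `get`/`set` methods are placeholders for the framework's actual memory backend, which this skill does not document:

```typescript
// Hypothetical interface: the real memory backend is framework-specific;
// key paths follow the namespace layout above.
interface MemoryStore {
  set(key: string, value: unknown): Promise<void>;
  get<T>(key: string): Promise<T | undefined>;
}

async function recordRiskScore(store: MemoryStore, feature: string, risk: number): Promise<void> {
  await store.set(`aqe/risk-based/risk-scores/${feature}`, {
    risk,
    updatedAt: new Date().toISOString(),
  });
}

async function readRiskScore(store: MemoryStore, feature: string) {
  return store.get<{ risk: number; updatedAt: string }>(`aqe/risk-based/risk-scores/${feature}`);
}
```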

### Fleet Coordination
```typescript
const riskFleet = await FleetManager.coordinate({
  strategy: 'risk-based-testing',
  agents: [
    'qe-regression-risk-analyzer',  // Risk scoring
    'qe-test-generator',            // Risk-appropriate tests
    'qe-production-intelligence',   // Production feedback
    'qe-quality-gate'               // Risk-based gates
  ],
  topology: 'sequential'
});
```

---

## Integration with CI/CD

```yaml
# Risk-based test selection in the pipeline (GitHub Actions steps).
# Assumes `aqe risk-analyze` writes counts such as `critical`, `high`, and
# `low_only` to $GITHUB_OUTPUT so later steps can branch on them.
- name: Risk Analysis
  id: risk
  run: |
    # The pull_request event payload does not expose a file list directly;
    # derive changed files from git instead (requires the base ref to be fetched).
    aqe risk-analyze --changes "$(git diff --name-only origin/${{ github.base_ref }}...HEAD)"

- name: Run Critical Tests
  if: steps.risk.outputs.critical > 0
  run: npm run test:critical

- name: Run High Tests
  if: steps.risk.outputs.high > 0
  run: npm run test:high

- name: Low Risk Only - Smoke Tests
  if: steps.risk.outputs.low_only == 'true'
  run: npm run test:smoke
```

---

## Related Skills
- [agentic-quality-engineering](../agentic-quality-engineering/) - Risk-aware agents
- [context-driven-testing](../context-driven-testing/) - Context affects risk
- [regression-testing](../regression-testing/) - Risk-based regression selection
- [shift-right-testing](../shift-right-testing/) - Production informs risk

---

## Remember

**Risk = Probability × Impact.** Test where bugs hurt most. Critical gets 60%, low gets 5%. Risk is dynamic - reassess with new info. Production incidents raise risk scores.

**With Agents:** Agents calculate risk using ML on historical data, select risk-appropriate tests, and adjust scores from production feedback. Use agents to maintain dynamic risk profiles at scale.

Overview

This skill focuses testing effort on the highest-risk areas by combining simple risk scoring, prioritization, and continuous reassessment. It provides patterns and automation hooks to calculate Probability × Impact, assign priorities, and map each risk band to an appropriate test depth and share of effort. Use it to make defensible, data-driven coverage and allocation decisions across the SDLC.

How this skill works

The skill inspects features, changes, historical bugs, and production incidents to compute a risk score using a 1–5 scale for probability and impact. Scores are multiplied to produce risk bands (Critical, High, Medium, Low) and the skill recommends test depth and effort allocation per band. It supports dynamic updates (production events or code churn), ML-enhanced predictions from historical data, and integration points for CI/CD to run risk-appropriate test sets.

When to use it

  • Planning sprint or release test strategy and defining scope
  • Deciding what to automate first under limited time
  • Selecting regression tests in CI/CD based on recent changes
  • Reprioritizing tests after production incidents or hotfixes
  • Justifying test coverage and resource allocation to stakeholders

Best practices

  • Score Probability and Impact on a 1–5 scale and multiply for a clear, repeatable metric
  • Allocate effort by priority (suggested: 60% critical, 25% high, 10% medium, 5% low) and adjust to context
  • Reassess risk continuously—use production data and recent failures to raise scores
  • Combine automated unit/integration/e2e tests for critical areas and lighter techniques for low risk
  • Integrate risk analysis into CI to select test suites dynamically and save pipeline time

Example use cases

  • During sprint planning, score all features and target comprehensive tests for critical items like checkout
  • In CI, run only critical and high-risk suites for fast feedback on risky changes
  • After a production bug in search, increase its probability and escalate test depth to high
  • Use ML models on historical bugs to predict file- or component-level risk and focus reviews
  • Allocate automation efforts to high-impact flows first (payment, auth) and smoke-test low-risk areas

FAQ

How do I compute the risk score quickly?

Rate Probability and Impact from 1 to 5, then multiply: Risk = Probability × Impact. Use thresholds to map scores to Critical (20–25), High (12–19), Medium (6–11), Low (1–5).

How strictly should I follow the recommended effort split?

Treat 60/25/10/5 as a guideline. Adjust it to team capacity, regulatory needs, and product context, but always reserve the bulk of the effort for critical items.