
requirement-analyzer skill

/skills/generic/requirement-analyzer

This skill analyzes and documents software requirements, decomposing them into functional requirements, non-functional requirements, and constraints to produce clear, testable specifications.

npx playbooks add skill masanao-ohba/claude-manifests --skill requirement-analyzer


SKILL.md
---
name: requirement-analyzer
description: Analyzes and documents requirements for any software project using a technology-agnostic approach to requirements engineering
---

# Generic Requirement Analyzer

A technology-agnostic skill for analyzing, categorizing, and documenting software requirements for any type of project.

## Core Responsibilities

### 1. Requirement Decomposition

Break down requirements into universal categories:

```yaml
Requirement Categories:
  Functional:
    - What the system must do
    - User interactions
    - Business logic
    - Data processing

  Non-Functional:
    - Performance targets
    - Security requirements
    - Usability standards
    - Reliability metrics

  Constraints:
    - Technical limitations
    - Business rules
    - Regulatory compliance
    - Resource boundaries
```

### 2. Requirement Analysis Framework

**Universal User Story Format:**
```yaml
User Story:
  as_a: [user role/persona]
  i_want: [feature/capability]
  so_that: [business value/benefit]

Acceptance Criteria:
  given: [initial context/state]
  when: [action/trigger]
  then: [expected outcome/result]
```

**Requirement Specification Template:**
```yaml
Requirement:
  id: REQ-[number]
  title: [brief description]
  type: functional|non-functional|constraint
  priority: must|should|could|wont
  description: [detailed description]
  rationale: [why this is needed]
  source: [stakeholder/document]
  acceptance_criteria:
    - [measurable criterion 1]
    - [measurable criterion 2]
  dependencies:
    - [related requirement ids]
  risks:
    - [potential risks]
  assumptions:
    - [underlying assumptions]
```
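The template above can also be enforced programmatically. As an illustrative sketch (the helper name `validate_requirement` and the dict representation are assumptions, not part of the skill), the following Python checks a requirement against the template's field names and enumerations:

```python
# Minimal validator for the requirement template above.
# Field names mirror the YAML template; this is a sketch, not a full schema.
REQUIRED_FIELDS = {"id", "title", "type", "priority", "description"}
VALID_TYPES = {"functional", "non-functional", "constraint"}
VALID_PRIORITIES = {"must", "should", "could", "wont"}

def validate_requirement(req: dict) -> list[str]:
    """Return a list of problems; an empty list means the requirement passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - req.keys())]
    if req.get("type") not in VALID_TYPES:
        problems.append(f"invalid type: {req.get('type')!r}")
    if req.get("priority") not in VALID_PRIORITIES:
        problems.append(f"invalid priority: {req.get('priority')!r}")
    if not req.get("acceptance_criteria"):
        problems.append("no acceptance criteria defined")
    return problems
```

A requirement loaded from the YAML template (e.g. via `yaml.safe_load`) can be passed straight to this function before it enters the backlog.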

### 3. Priority Frameworks

**MoSCoW Method:**
```yaml
Must Have:
  - Critical for launch
  - System fails without it
  - Legal/regulatory requirement

Should Have:
  - Important but not vital
  - Workaround exists
  - High value for users

Could Have:
  - Desirable but not necessary
  - Nice to have
  - Can be postponed

Won't Have (this time):
  - Out of scope
  - Future consideration
  - Explicitly excluded
```
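Because MoSCoW categories form a strict order, a backlog can be ranked mechanically. A minimal sketch (the function name `order_backlog` is illustrative; priority tokens match the requirement template earlier in this document):

```python
# Rank requirements by MoSCoW priority; "must" sorts first, "wont" last.
MOSCOW_RANK = {"must": 0, "should": 1, "could": 2, "wont": 3}

def order_backlog(requirements: list[dict]) -> list[dict]:
    """Stable sort: requirements with equal priority keep their original order."""
    return sorted(requirements, key=lambda r: MOSCOW_RANK.get(r["priority"], 99))
```

Unknown priority tokens sort last (rank 99) rather than raising, so a partially categorized backlog still orders cleanly.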

**Kano Model:**
```yaml
Basic Needs:
  - Expected by users
  - Causes dissatisfaction if missing

Performance Needs:
  - More is better
  - Linear satisfaction

Excitement Needs:
  - Delighters
  - Unexpected features
  - Competitive advantage
```

### 4. Requirement Quality Criteria

**SMART Requirements:**
```yaml
Specific:
  - Clear and unambiguous
  - Well-defined scope

Measurable:
  - Quantifiable success criteria
  - Testable conditions

Achievable:
  - Technically feasible
    - Realistic given resources

Relevant:
  - Aligns with business goals
  - Provides value

Time-bound:
  - Has deadline/milestone
  - Time constraints defined
```

**Completeness Checklist:**
```yaml
Requirement Completeness:
  - [ ] Clear description
  - [ ] Defined acceptance criteria
  - [ ] Identified stakeholder
  - [ ] Priority assigned
  - [ ] Dependencies mapped
  - [ ] Risks assessed
  - [ ] Assumptions documented
  - [ ] Traceability established
```

### 5. Stakeholder Analysis

**Stakeholder Mapping:**
```yaml
Stakeholder Analysis:
  identification:
    - End users
    - Business owners
    - Technical team
    - Regulatory bodies
    - External partners

  influence_interest_matrix:
    high_influence_high_interest:
      - Key players
      - Manage closely

    high_influence_low_interest:
      - Keep satisfied
      - Regular updates

    low_influence_high_interest:
      - Keep informed
      - Regular communication

    low_influence_low_interest:
      - Monitor
      - Minimal effort
```
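The influence/interest matrix maps directly to an engagement strategy lookup. A minimal sketch (the function name and string labels are illustrative, taken from the matrix above):

```python
# Map the influence/interest matrix above to an engagement strategy.
def engagement_strategy(influence: str, interest: str) -> str:
    """influence and interest are each "high" or "low"."""
    table = {
        ("high", "high"): "manage closely",
        ("high", "low"): "keep satisfied",
        ("low", "high"): "keep informed",
        ("low", "low"): "monitor",
    }
    return table[(influence, interest)]
```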

### 6. Requirement Validation

**Validation Techniques:**
```yaml
Review Methods:
  walkthrough:
    - Informal review
    - Author-led
    - Educational focus

  inspection:
    - Formal review
    - Defined roles
    - Defect detection

  prototype_validation:
    - Visual representation
    - User feedback
    - Early validation
```

### 7. Achievement Indicators Framework

**Achievement Indicators Generation:**
```yaml
Achievement Indicator Structure:
  functional_completeness:
    - indicator: "All user stories completed"
      measurable: "Story points delivered"
      threshold: "100% of committed stories"
      priority: high

    - indicator: "API contracts fulfilled"
      measurable: "Endpoint tests pass"
      threshold: "All endpoints return expected data"
      priority: high

  quality_standards:
    - indicator: "Code quality maintained"
      measurable: "Lint score"
      threshold: "> 90/100"
      priority: medium

    - indicator: "Test coverage adequate"
      measurable: "Coverage percentage"
      threshold: ">= 80%"
      priority: high

  performance_criteria:
    - indicator: "Response time acceptable"
      measurable: "95th percentile latency"
      threshold: "< 200ms"
      priority: medium

    - indicator: "Throughput sufficient"
      measurable: "Requests per second"
      threshold: "> 1000 RPS"
      priority: low

  security_requirements:
    - indicator: "Authentication secure"
      measurable: "Security audit pass"
      threshold: "No critical vulnerabilities"
      priority: high
```
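The threshold strings above (">= 80%", "< 200ms", "> 90/100") follow a common shape: a comparison operator followed by a number. A hedged sketch of an evaluator for them (the function name `meets_threshold` is illustrative; it parses only the operator and the first number, ignoring units):

```python
import re

# Evaluate a measured value against threshold strings like ">= 80%" or "< 200ms",
# as used in the achievement-indicator examples above.
_THRESHOLD = re.compile(r"(<=|>=|<|>)\s*([\d.]+)")

def meets_threshold(measured: float, threshold: str) -> bool:
    """Parse the operator and first number from the threshold string and compare."""
    m = _THRESHOLD.search(threshold)
    if m is None:
        raise ValueError(f"unparseable threshold: {threshold!r}")
    op, limit = m.group(1), float(m.group(2))
    return {"<": measured < limit, "<=": measured <= limit,
            ">": measured > limit, ">=": measured >= limit}[op]
```

This lets an evaluation step report pass/fail per indicator from raw measurements without hand-interpreting each threshold string.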

**Success Criteria (Achievement Objectives) Generation:**
```yaml
Success Criteria Structure:
  implementation_goals:
    - objective: "Deliver core functionality"
      success_criteria: "User can complete primary workflow"
      measurement: "End-to-end test passes"

    - objective: "Maintain system stability"
      success_criteria: "No regression in existing features"
      measurement: "Regression test suite passes"

  quality_goals:
    - objective: "Ensure maintainability"
      success_criteria: "Code follows standards"
      measurement: "Code review approval"

    - objective: "Document adequately"
      success_criteria: "All public APIs documented"
      measurement: "Documentation coverage 100%"

  business_goals:
    - objective: "Meet user expectations"
      success_criteria: "User acceptance criteria met"
      measurement: "UAT sign-off received"

    - objective: "Enable future growth"
      success_criteria: "Architecture supports scaling"
      measurement: "Load test validates 10x capacity"
```

**Acceptance Criteria Document Generation Template:**
```yaml
# Acceptance Criteria Document
# Generated: {timestamp}
# Project: {project_name}
# Task: {task_description}
# Size: {trivial|small|medium|large}

acceptance_criteria:
  task_classification:
    size: {estimated_size}
    complexity: {low|medium|high}
    risk_level: {low|medium|high}

  indicators: # Achievement Indicators - What we measure
    completeness:
      - id: "AI-COMP-001"
        indicator: {description}
        measurement_method: {how_to_measure}
        pass_threshold: {specific_value}
        priority: {must|should|could}
        verification_agent: {which_agent_verifies}

    quality:
      - id: "AI-QUAL-001"
        indicator: {description}
        measurement_method: {automated|manual}
        pass_threshold: {metric_value}
        priority: {level}
        verification_agent: {agent_name}

    performance:
      - id: "AI-PERF-001"
        indicator: {description}
        measurement_method: {tool_or_process}
        pass_threshold: {numeric_threshold}
        priority: {level}
        verification_agent: {agent_name}

  objectives: # Success Criteria - What we aim to achieve
    primary:
      - id: "SC-PRIM-001"
        objective: {clear_goal}
        success_definition: {when_considered_achieved}
        responsible_agent: {implementation_agent}
        deadline: {if_applicable}

    secondary:
      - id: "SC-SEC-001"
        objective: {supporting_goal}
        success_definition: {criteria}
        responsible_agent: {agent_name}

  evaluation_protocol:
    stage_1_pre_implementation:
      - action: "Generate acceptance criteria document"
        responsible: "goal-clarifier"

    stage_2_implementation:
      - action: "Use acceptance criteria as implementation guide"
        responsible: "code-developer"

    stage_3_evaluation:
      - action: "Evaluate against achievement indicators"
        responsible: "deliverable-evaluator"
        input: "Acceptance criteria document"

    stage_4_iteration:
      - action: "If FAIL, return to stage 2 with feedback"
      - max_iterations: 3

  output_location: ".work/requirements/acceptance-criteria-{timestamp}.yaml"
```
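Generated documents can be linted for the ID naming used in the template above ("AI-COMP-001", "SC-PRIM-001", and so on). A minimal sketch (the pattern is inferred from the template's examples, not a formal grammar defined by this skill):

```python
import re

# Check that criterion ids follow the AI-/SC- naming used in the template above:
# a prefix, an uppercase category code, and a zero-padded three-digit sequence.
ID_PATTERN = re.compile(r"(AI|SC)-[A-Z]+-\d{3}")

def valid_id(criterion_id: str) -> bool:
    return ID_PATTERN.fullmatch(criterion_id) is not None
```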

**Validation Criteria:**
```yaml
Quality Attributes:
  correctness:
    - Accurately represents need
    - Factually correct

  consistency:
    - No conflicts
    - Uniform terminology

  completeness:
    - All aspects covered
    - No gaps

  feasibility:
    - Technically possible
    - Resources available

  testability:
    - Verifiable criteria
    - Measurable outcomes
```

## Analysis Process

### Step 1: Requirement Gathering
```yaml
Sources:
  - Stakeholder interviews
  - Existing documentation
  - Market research
  - Competitive analysis
  - User feedback
  - Regulatory requirements
```

### Step 2: Categorization
```yaml
Classification:
  1. Parse raw requirements
  2. Identify requirement type
  3. Assign categories
  4. Group related items
  5. Identify patterns
```
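Step 2 ("Identify requirement type") can be bootstrapped with a naive keyword pass before human review. A sketch under loud assumptions: the keyword lists are illustrative only, and anything unmatched defaults to functional; real categorization still requires the review steps above.

```python
# Naive keyword-based first pass for requirement-type identification.
KEYWORDS = {
    "non-functional": ["latency", "throughput", "availability", "secure", "usab"],
    "constraint": ["must not", "comply", "regulation", "budget", "deadline"],
}

def classify(text: str) -> str:
    lowered = text.lower()
    for category, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return category
    return "functional"  # default: behaviour the system must provide
```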

### Step 3: Analysis
```yaml
Analysis Activities:
  1. Ambiguity resolution
  2. Conflict identification
  3. Gap analysis
  4. Dependency mapping
  5. Risk assessment
```

### Step 4: Documentation
```yaml
Documentation Outputs:
  - Requirement specification
  - Traceability matrix
  - Stakeholder matrix
  - Risk register
  - Assumption log
```

## Output Formats

### Requirement Specification Document
```markdown
# Requirement Specification

## Executive Summary
[High-level overview of requirements]

## Functional Requirements
### FR-001: [Title]
- **Description**: [Detail]
- **Priority**: Must Have
- **Acceptance Criteria**:
  1. [Criterion 1]
  2. [Criterion 2]

## Non-Functional Requirements
### NFR-001: [Title]
- **Category**: Performance
- **Metric**: Response time < 2 seconds
- **Validation**: Load testing

## Constraints
### CON-001: [Title]
- **Type**: Technical
- **Description**: [Limitation details]
- **Impact**: [How it affects design]

## Dependencies
[Requirement dependency diagram]

## Risks and Assumptions
### Risks
- [Risk 1]: [Mitigation strategy]

### Assumptions
- [Assumption 1]: [Validation plan]
```

### Traceability Matrix
```yaml
Traceability:
  REQ-001:
    source: "User Interview #3"
    design_element: "Component A"
    test_case: "TC-001, TC-002"
    implementation: "Module X"

  REQ-002:
    source: "Regulatory Doc Section 4.2"
    design_element: "Security Layer"
    test_case: "TC-010"
    implementation: "Auth Module"
```
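A traceability matrix in this shape supports cheap coverage checks, such as flagging requirements with no linked test case. A minimal sketch (the function name `untested` is illustrative; the dict shape mirrors the YAML example above):

```python
# Find traceability gaps in a matrix shaped like the YAML example above:
# each requirement id maps to its source, design element, test cases, and module.
def untested(matrix: dict) -> list[str]:
    """Return requirement ids that have no linked test case."""
    return [rid for rid, links in matrix.items() if not links.get("test_case")]
```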

## Integration Points

### With Project Configuration
```yaml
# Project-specific configuration
requirement_config:
  project_type: "web_application|mobile_app|api|system"

  priority_framework: "moscow|kano|numerical"

  requirement_categories:
    - functional
    - non-functional
    - business
    - technical

  validation_requirements:
    review_type: "formal|informal"
    approval_levels: 2

  compliance_standards:
    - "ISO 27001"
    - "GDPR"
```

### With Technology-Specific Skills
```yaml
# Technology-specific extensions
tech_specific:
  framework: "[Framework name]"

  requirement_patterns:
    - "[Framework-specific pattern]"

  validation_rules:
    - "[Technology-specific rule]"
```

## Best Practices

1. **Stakeholder Engagement**: Involve all stakeholders early
2. **Clear Communication**: Use unambiguous language
3. **Measurable Criteria**: Define quantifiable success metrics
4. **Iterative Refinement**: Requirements evolve, plan for changes
5. **Traceability**: Maintain links from requirements to implementation
6. **Risk-Based Focus**: Prioritize based on risk and value
7. **Documentation**: Keep requirements current and accessible

## Universal Application

This skill can be applied to:
- Web applications
- Mobile applications
- Desktop software
- Embedded systems
- APIs and services
- Cloud platforms
- IoT solutions
- Enterprise systems
- Machine learning projects
- Any software development project

The skill adapts to project needs through configuration rather than modification, ensuring reusability across different technology stacks and domains.

Overview

This skill analyzes and documents software requirements using a technology-agnostic, repeatable approach. It decomposes raw inputs into functional, non-functional, and constraint categories and generates clear, testable specification artifacts. The output is designed for traceability, validation, and integration with project workflows.

How this skill works

It ingests stakeholder inputs, documents and research, then classifies each item by type, priority and dependencies. The skill applies user-story and acceptance-criteria templates, SMART quality checks, and MoSCoW/Kano prioritization to produce requirement specs, traceability matrices, and acceptance criteria documents. It also generates achievement indicators and validation protocols to drive implementation and evaluation.

When to use it

  • At project kickoff to translate stakeholder needs into structured requirements
  • When refining product roadmaps or preparing a release backlog
  • Before design or implementation to reduce ambiguity and technical risk
  • When compliance, security or performance constraints must be formalized
  • During vendor selection or procurement to define measurable acceptance criteria

Best practices

  • Engage all relevant stakeholders early and validate assumptions with prototypes
  • Use measurable acceptance criteria and SMART checks for every requirement
  • Assign priorities with MoSCoW or Kano and document trade-offs
  • Maintain traceability from source to test cases and implementation
  • Iterate requirements through review/inspection cycles and record decisions

Example use cases

  • Create a requirement specification for a web or mobile application including FRs, NFRs and constraints
  • Produce acceptance criteria and achievement indicators for a new API rollout
  • Map stakeholder influence and communications for regulatory or enterprise projects
  • Perform gap analysis and risk assessment before a large refactor or migration
  • Generate a traceability matrix to support testing and compliance audits

FAQ

What formats do outputs take?

Outputs include requirement specification documents, acceptance-criteria YAML, traceability matrices and stakeholder/risk registers suitable for markdown or YAML export.

Is this tied to any technology stack?

No. The approach is technology-agnostic and configurable for web, mobile, embedded, cloud, ML and other project types.

How are priorities assigned?

Priorities use MoSCoW by default and can be extended with Kano or numerical schemes; assignment combines stakeholder value and risk assessment.