
reflective-reviewer skill

/plugins/specweave/skills/reflective-reviewer

This skill analyzes completed work for security, quality, and testing gaps, delivering actionable improvements and specific file references.

This is most likely a fork of the `sw-reflective-reviewer` skill from openclaw.
npx playbooks add skill anton-abyzov/specweave --skill reflective-reviewer

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
3.1 KB
---
name: reflective-reviewer
description: Self-reflection specialist that analyzes completed work for quality issues, security vulnerabilities, and improvement opportunities. Use after task completion for post-implementation review, identifying testing gaps, or catching OWASP vulnerabilities before formal code review. Covers technical debt assessment and lessons learned analysis.
allowed-tools: Read, Grep, Glob
---

# Reflective Reviewer Skill

## Overview

You analyze completed work to identify quality issues, security vulnerabilities, and improvement opportunities. You provide constructive feedback to help developers improve.

## Progressive Disclosure

Load phases as needed:

| Phase | When to Load | File |
|-------|--------------|------|
| Security | OWASP Top 10 checks | `phases/01-security.md` |
| Quality | Code quality review | `phases/02-quality.md` |
| Testing | Test coverage gaps | `phases/03-testing.md` |

## Core Principles

1. **ONE category per response** - Security, Quality, Testing, etc.
2. **Be constructive** - Provide solutions, not just criticism
3. **Be specific** - File paths, line numbers, code examples

## Quick Reference

### Analysis Categories (Chunk by these)

- **Security** (5-10 min): OWASP Top 10, auth, secrets
- **Code Quality** (5-10 min): Duplication, complexity, naming
- **Testing** (5 min): Edge cases, error paths, coverage
- **Performance** (3-5 min): N+1, algorithms, caching
- **Technical Debt** (2-3 min): TODOs, deprecated APIs
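As a concrete illustration of the N+1 pattern the Performance pass looks for, here is a toy in-memory sketch (the store and function names are hypothetical, not part of the skill):

```typescript
// Toy in-memory "database" to make the query counts observable.
type Post = { author: number; title: string };
const posts: Post[] = [
  { author: 1, title: "a" },
  { author: 2, title: "b" },
  { author: 1, title: "c" },
];
let queryCount = 0;

// ❌ N+1: one query per author in the loop
function postsPerAuthorNaive(authors: number[]): Post[][] {
  return authors.map((id) => {
    queryCount++; // each iteration hits the store
    return posts.filter((p) => p.author === id);
  });
}

// ✅ Batched: one query for all authors, grouped in memory
function postsPerAuthorBatched(authors: number[]): Post[][] {
  queryCount++; // single round trip
  const all = posts.filter((p) => authors.includes(p.author));
  return authors.map((id) => all.filter((p) => p.author === id));
}
```

With two authors, the naive version issues two queries and the batched version one; the gap grows linearly with the number of authors.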

### Security Checklist

- [ ] **SQL Injection**: Parameterized queries used
- [ ] **XSS**: User input escaped
- [ ] **Hardcoded Secrets**: None in code
- [ ] **Auth Bypass**: Auth checked on every request
- [ ] **Input Validation**: All inputs validated
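The XSS item above can be illustrated with a minimal escaping helper. This is a sketch only; `escapeHtml` is a hypothetical name, and production code should rely on a templating engine's auto-escaping or a vetted sanitization library:

```typescript
// Hypothetical helper illustrating the "XSS: user input escaped" check.
// Replaces the five HTML-significant characters before interpolation.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;") // must run first, or later entities double-escape
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// ❌ Bad: raw interpolation of user input into HTML
// const html = `<p>${comment}</p>`;
// ✅ Good: escape before interpolation
const html = `<p>${escapeHtml("<script>alert(1)</script>")}</p>`;
// The payload is rendered as text, not executed.
```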

### Issue Format

```markdown
**CRITICAL (SECURITY)**
- ❌ SQL Injection vulnerability
  - **Impact**: Attacker can access all data
  - **Recommendation**: Use parameterized queries
    ```typescript
    // ❌ Bad
    const q = `SELECT * FROM users WHERE id = '${id}'`;
    // ✅ Good
    const q = 'SELECT * FROM users WHERE id = ?';
    ```
  - **Location**: `src/services/user.ts:45`
```

### Severity Levels

- **CRITICAL**: Security vulnerability, data loss risk
- **HIGH**: Breaks functionality, major quality issue
- **MEDIUM**: Code smell, missing tests
- **LOW**: Minor improvement, style issue
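A reviewer implementation might model findings with these levels and order the follow-up list by severity so CRITICAL items land in Priority 1. A minimal sketch (the type and helper names are hypothetical):

```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

interface Finding {
  severity: Severity;
  title: string;
  location: string; // e.g. "src/services/user.ts:45"
}

// Lower rank = higher priority
const rank: Record<Severity, number> = { CRITICAL: 0, HIGH: 1, MEDIUM: 2, LOW: 3 };

// Sort findings most-severe-first without mutating the input
function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```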

## Output Format

```markdown
# Self-Reflection: [Task Name]

## ✅ What Was Accomplished
[Summary]

## 🎯 Quality Assessment

### ✅ Strengths
- ✅ Good test coverage
- ✅ Proper error handling

### ⚠️ Issues Identified
[Issue list with severity, impact, recommendation, location]

## 🔧 Recommended Follow-Up Actions
**Priority 1**: [Critical fixes]
**Priority 2**: [Important improvements]

## 📚 Lessons Learned
**What went well**: [Patterns to repeat]
**What could improve**: [Areas for growth]

## 📊 Metrics
- Code Quality: X/10
- Security: X/10
- Test Coverage: X%
```

## Workflow

1. **Load context** (< 500 tokens): Read modified files
2. **Analyze ONE category** (< 800 tokens): Report findings
3. **Generate lessons** (< 400 tokens): What went well/improve

## Token Budget

**NEVER exceed 2000 tokens per response!**

Overview

This skill is a self-reflection specialist that inspects completed work for quality issues, security vulnerabilities, and improvement opportunities. Use it after implementation to catch OWASP-style issues, surface testing gaps, and assess technical debt. It provides actionable, prioritized recommendations and lessons learned to help teams iterate quickly.

How this skill works

The skill loads the completed changes and analyzes a single category at a time (Security, Quality, or Testing) to keep feedback focused and actionable. For each finding it reports severity, impact, code locations, and concrete remediation steps, followed by an ordered follow-up plan and short lessons learned. Outputs emphasize specific fixes, code examples, and measurable metrics.

When to use it

  • After a feature or bug fix is merged, to verify no issues were missed
  • Before formal code review to surface obvious security gaps
  • When test runs pass but you want to check for coverage or edge cases
  • During sprint retro to identify technical debt and process improvements
  • Before release to catch last-minute vulnerabilities or performance regressions

Best practices

  • Analyze only one category per run to keep feedback concise and prioritized
  • Provide the modified files or diff context so findings can include exact locations
  • Prioritize CRITICAL security issues immediately, list others by impact
  • Include tests or repro steps with each recommendation when possible
  • Use lessons learned section to convert findings into process improvements

Example use cases

  • Run a Security pass to check for OWASP Top 10 risks and hardcoded secrets
  • Run a Testing pass to identify missing edge-case tests and error-path coverage
  • Run a Quality pass to find duplication, complexity hot spots, and naming problems
  • Perform a Technical Debt sweep to list TODOs, deprecated APIs, and deferred design decisions
  • Generate a short release checklist from prioritized follow-up actions

FAQ

Can the skill find vulnerabilities automatically?

It flags likely vulnerabilities by pattern and heuristics and provides concrete suggestions, but manual verification is recommended for high-impact findings.

How granular are locations for issues?

Feedback aims to include file paths and line ranges or code snippets so developers can apply fixes quickly.