This skill processes and implements code review feedback methodically, generating actionable tasks and guiding systematic changes.

npx playbooks add skill krosebrook/source-of-truth-monorepo --skill review-implementing

---
name: review-implementing
description: Process and implement code review feedback systematically. Use when user provides reviewer comments, PR feedback, code review notes, or asks to implement suggestions from reviews. Activates on phrases like "implement this feedback", "address review comments", or when user pastes reviewer notes.
---

# Review Feedback Implementation

Systematically process and implement changes based on code review feedback.

## When to Use

Automatically activate when the user:
- Provides reviewer comments or feedback
- Pastes PR review notes
- Mentions implementing review suggestions
- Says "address these comments" or "implement feedback"
- Shares a list of changes requested by reviewers

## Systematic Workflow

### 1. Parse Reviewer Notes

Identify individual feedback items (a parsing sketch follows the list):
- Split numbered lists (1., 2., etc.)
- Handle bullet points or unnumbered feedback
- Extract distinct change requests
- Clarify any ambiguous items before starting
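
For example, a minimal sketch of the parsing step, assuming plain-text notes (the function name and regex are illustrative, not part of the skill):

```python
import re

def split_feedback(notes: str) -> list[str]:
    """Split pasted review notes into discrete feedback items."""
    # Break on line starts that look like "1." / "2)" numbering
    # or "-", "*", "•" bullet markers.
    parts = re.split(r"(?m)^\s*(?:\d+[.)]|[-*•])\s+", notes)
    items = [p.strip() for p in parts if p.strip()]
    # Unstructured feedback falls through as a single item.
    return items or [notes.strip()]
```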

### 2. Create Todo List

Use TodoWrite tool to create actionable tasks:
- Each feedback item becomes one or more todos
- Break down complex feedback into smaller tasks
- Make tasks specific and measurable
- Mark first task as `in_progress` before starting

Example:
```
- Add type hints to extract function
- Fix duplicate tag detection logic
- Update docstring in chain.py
- Add unit test for edge case
```
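
Internally, the tracked state could be pictured as the Python literal below. This is only an illustration of the status values; it is not the TodoWrite tool's actual schema.

```python
# Illustrative shape only; the real TodoWrite schema may differ.
todos = [
    {"task": "Add type hints to extract function", "status": "in_progress"},
    {"task": "Fix duplicate tag detection logic", "status": "pending"},
    {"task": "Update docstring in chain.py", "status": "pending"},
    {"task": "Add unit test for edge case", "status": "pending"},
]
```

Note that exactly one entry is `in_progress`, matching the one-at-a-time rule.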

### 3. Implement Changes Systematically

For each todo item:

**Locate relevant code:**
- Use Grep to search for functions/classes
- Use Glob to find files by pattern
- Read current implementation

**Make changes:**
- Use Edit tool for modifications
- Follow project conventions (CLAUDE.md)
- Preserve existing functionality unless the feedback explicitly requests a behavior change

**Verify changes:**
- Check syntax correctness (see the sketch after this list)
- Run relevant tests if applicable
- Ensure changes address reviewer's intent
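
For Python files, the syntax check can be scripted. One possible approach (a sketch, not a mandated step) uses the standard-library `ast` module:

```python
import ast
from pathlib import Path

def check_syntax(path: str) -> bool:
    """Return True if the file at `path` parses as valid Python."""
    try:
        ast.parse(Path(path).read_text(), filename=path)
        return True
    except SyntaxError as exc:
        # Report where parsing failed so the edit can be corrected.
        print(f"{path}:{exc.lineno}: {exc.msg}")
        return False
```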

**Update status:**
- Mark todo as `completed` immediately after finishing
- Move to next todo (only one `in_progress` at a time)

### 4. Handle Different Feedback Types

**Code changes:**
- Use Edit tool for existing code
- Follow type hint conventions (PEP 604/585; illustrated below)
- Maintain consistent style
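
For reference, here is a hypothetical `extract` signature in the target style, using PEP 585 built-in generics (`list[str]`, `dict[str, str]`) and a PEP 604 union (`list[str] | None`):

```python
# Hypothetical signature; the real extract function may differ.
def extract(text: str, tags: list[str] | None = None) -> dict[str, str]:
    """Extract tagged spans from text."""
    ...
```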

**New features:**
- Create new files with Write tool if needed
- Add corresponding tests
- Update documentation

**Documentation:**
- Update docstrings following project style
- Modify markdown files as needed
- Keep explanations concise

**Tests:**
- Write tests as functions, not classes (example below)
- Use descriptive names
- Follow pytest conventions
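
A sketch in that style; `detect_duplicate_tags` and its module are hypothetical stand-ins for the code under test:

```python
# test_tags.py: function-style pytest tests with descriptive names.
from tags import detect_duplicate_tags  # hypothetical module under test

def test_detect_duplicate_tags_flags_repeated_tag():
    assert detect_duplicate_tags(["a", "b", "a"]) == ["a"]

def test_detect_duplicate_tags_empty_for_unique_tags():
    assert detect_duplicate_tags(["a", "b", "c"]) == []
```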

**Refactoring:**
- Preserve functionality
- Improve code structure (see the before/after sketch below)
- Run tests to verify no regressions
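
As an illustration (names are hypothetical), a behavior-preserving refactor might replace a quadratic scan with a single set-based pass; both versions return the same result for any input:

```python
# Before: O(n^2) nested scan for duplicated tags.
def detect_duplicate_tags(tags: list[str]) -> list[str]:
    dupes = []
    for i, tag in enumerate(tags):
        if tag in tags[:i] and tag not in dupes:
            dupes.append(tag)
    return dupes

# After: one pass with a set; identical results, clearer structure.
def detect_duplicate_tags(tags: list[str]) -> list[str]:
    seen: set[str] = set()
    dupes: list[str] = []
    for tag in tags:
        if tag in seen and tag not in dupes:
            dupes.append(tag)
        seen.add(tag)
    return dupes
```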

### 5. Validation

After implementing changes:
- Run affected tests
- Check for linting errors: `uv run ruff check`
- Verify changes don't break existing functionality

### 6. Communication

Keep user informed:
- Update todo list in real-time
- Ask for clarification on ambiguous feedback
- Report blockers or challenges
- Summarize changes at completion

## Edge Cases

**Conflicting feedback:**
- Ask user for guidance
- Explain the conflict clearly

**Breaking changes required:**
- Notify user before implementing
- Discuss impact and alternatives

**Tests fail after changes:**
- Fix tests before marking todo complete
- Ensure all related tests pass

**Referenced code doesn't exist:**
- Ask user for clarification
- Verify understanding before proceeding

## Important Guidelines

- **Always use TodoWrite** for tracking progress
- **Mark todos completed immediately** after each item
- **Only one todo in_progress** at any time
- **Don't batch completions** - update status in real-time
- **Ask questions** for unclear feedback
- **Run tests** if changes affect tested code
- **Follow CLAUDE.md conventions** for all code changes
- **Use conventional commits** if creating commits afterward

## Example

User: "Implement these review comments:
1. Add type hints to the extract function
2. Fix the duplicate tag detection logic
3. Update the docstring in chain.py"

**Actions:**
1. Create TodoWrite with 3 items
2. Mark item 1 as in_progress
3. Grep for extract function
4. Read file containing function
5. Edit to add type hints
6. Mark item 1 completed
7. Mark item 2 in_progress
8. Repeat process for remaining items
9. Summarize all changes made

Overview

This skill processes and implements code review feedback systematically for Python projects. It converts reviewer comments into tracked todos, applies changes in small, verifiable steps, and keeps you informed throughout the process. It is optimized for the Unified Source-of-Truth monorepo and follows project conventions and test-driven validation.

How this skill works

The skill parses pasted review notes into discrete, actionable items and creates a TodoWrite list. For each item it locates relevant files, edits code or documentation, runs linting and tests, and marks tasks completed one at a time. Ambiguities or conflicts are flagged for clarification before implementation, and a final summary of changes is provided at the end.

When to use it

  • You receive PR review comments to implement
  • You have a list of reviewer notes or pasted feedback
  • You want feedback translated into tracked, small tasks
  • You need changes applied with tests and linting validated
  • You must follow project conventions and communicate progress

Best practices

  • Break reviewer suggestions into small, measurable todos
  • Start only one todo as in_progress at a time and mark completed immediately after finishing
  • Ask clarifying questions before changing ambiguous code
  • Run affected tests and lint checks before marking work done
  • Follow repository conventions and use conventional commits when composing commits

Example use cases

  • User pastes numbered PR review comments asking for type annotations and tests
  • Reviewer requests refactor of duplicate detection logic and new unit tests
  • Docstring or markdown updates requested across several files
  • Large review with mixed requests: code changes, tests, and docs—convert to distinct todos and implement sequentially
  • Conflicting reviewer requests—identify conflict and ask user to choose a preferred approach

FAQ

What if a reviewer references code that doesn’t exist?

I will flag the item, ask you to confirm the correct target or provide the missing file, and pause that todo until clarified.

Do you run tests and linters automatically?

Yes: after edits I run the relevant tests and lint checks, and mark the todo complete only if they pass; failing tests are fixed before completion.

How are complex changes handled?

I break complex feedback into smaller subtasks, create separate todos for each step, and implement them sequentially so each change is verifiable.