
worktrunk-review skill

/.claude/skills/worktrunk-review

This skill reviews a pull request for idiomatic Rust, project conventions, and code quality to improve maintainability.

npx playbooks add skill max-sixty/worktrunk --skill worktrunk-review

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.1 KB
---
name: worktrunk-review
description: Reviews a pull request for idiomatic Rust, project conventions, and code quality. Use when asked to review a PR or when running as an automated PR reviewer.
argument-hint: "[PR number]"
---

# Worktrunk PR Review

Review a pull request to worktrunk, a Rust CLI tool for managing git worktrees.

**PR to review:** $ARGUMENTS

## Setup

Load these skills first:

1. `/reviewing-code` — systematic review checklist (design review, universal
   principles, completeness)
2. `/developing-rust` — Rust idioms and patterns

Then read CLAUDE.md (project root) to understand project-specific conventions.

## Instructions

1. Read the PR diff with `gh pr diff <number>`.
2. Read the changed files in full (not just the diff) to understand context.
3. Follow the `reviewing-code` skill's structure: design review first, then
   tactical checklist.

## What to review

**Idiomatic Rust and project conventions:**

- Does the code follow Rust idioms? (iterator chains over manual loops, `?` over
  match-on-error, proper use of `Option`/`Result`, etc.; see the sketch after
  this list)
- Does it follow the project's conventions documented in CLAUDE.md? (Cmd for
  shell commands, error handling with anyhow, accessor naming conventions, etc.)
- Are there unnecessary allocations, clones, or owned types where borrows would
  suffice?
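
For instance, a change that loops and matches on errors by hand is usually worth
a comment. The sketch below is illustrative only; `entry_names` is a hypothetical
helper, not a worktrunk API.

```rust
use anyhow::Result;
use std::fs;
use std::path::Path;

// The shape a review might flag: mutable state plus match-on-error.
fn entry_names_manual(root: &Path) -> Result<Vec<String>> {
    let mut names = Vec::new();
    for entry in fs::read_dir(root)? {
        let entry = match entry {
            Ok(e) => e,
            Err(err) => return Err(err.into()),
        };
        names.push(entry.file_name().to_string_lossy().into_owned());
    }
    Ok(names)
}

// The shape to suggest instead: `?` for propagation and an iterator chain.
fn entry_names(root: &Path) -> Result<Vec<String>> {
    fs::read_dir(root)?
        .map(|entry| -> Result<String> {
            Ok(entry?.file_name().to_string_lossy().into_owned())
        })
        .collect()
}
```

The same pass should ask whether an owned `String` parameter could be `&str`,
and whether any `.clone()` exists only to satisfy the borrow checker.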

**Code quality:**

- Is the code clear and well-structured?
- Are there simpler ways to express the same logic?
- Does it avoid unnecessary complexity, feature flags, or compatibility layers?

**Correctness:**

- Are there edge cases that aren't handled?
- Could the changes break existing functionality?
- Are error messages helpful and consistent with the project style? (See the
  sketch after this list.)
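
To illustrate the error-message point, anyhow's `Context` makes it cheap to say
which file or command failed. A minimal sketch, assuming a hypothetical
`read_worktree_config` helper:

```rust
use anyhow::{Context, Result};
use std::fs;
use std::path::Path;

// Hypothetical helper: `with_context` attaches the failing path, so the user
// sees which config file could not be read rather than a bare IO error.
fn read_worktree_config(path: &Path) -> Result<String> {
    fs::read_to_string(path)
        .with_context(|| format!("failed to read worktree config at {}", path.display()))
}
```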

**Testing:**

- Are the changes adequately tested? (See the sketch after this list.)
- Do the tests follow the project's testing conventions (see tests/CLAUDE.md)?
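
A minimal sketch of the kind of edge-case coverage worth asking for;
`parse_branch_name` is a stand-in for whatever the PR touches, and the
authoritative conventions are in tests/CLAUDE.md:

```rust
use anyhow::{ensure, Result};

// Stand-in function for illustration only.
fn parse_branch_name(name: &str) -> Result<String> {
    let name = name.trim();
    ensure!(!name.is_empty(), "branch name must not be empty");
    Ok(name.to_string())
}

// The point of the example: exercise a failure path, not only the happy path.
#[test]
fn rejects_empty_branch_name() {
    assert!(parse_branch_name("   ").is_err());
}
```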

## How to provide feedback

- Use inline comments for specific code issues.
- Use `gh pr comment` for a top-level summary.
- Be constructive and explain *why* something should change, not just *what*.
- Distinguish between suggestions (nice to have) and issues (should fix).
- Don't nitpick formatting — that's what linters are for.

Overview

This skill reviews a pull request to Worktrunk, a Rust CLI for git worktree management. It checks for idiomatic Rust, adherence to project conventions, and overall code quality. Use it when reviewing a PR by hand or as an automated reviewer integrated into CI. The goal is actionable feedback that improves correctness, clarity, and maintainability.

How this skill works

Load the supporting review and Rust-development skills, inspect the PR diff with gh pr diff, and read changed files in full to understand context. Perform a design-first review (architecture, intent, user-facing behavior) followed by a tactical checklist (idioms, allocations, error handling, tests). Provide inline comments for code-level issues and a top-level summary comment with prioritized recommendations.

When to use it

  • When asked to review a Worktrunk PR before merging
  • As an automated CI reviewer for new PRs to catch regressions early
  • When evaluating a refactor that might change public behavior or CLI UX
  • When assessing tests added or modified by a contributor
  • To enforce project conventions from the repository CLAUDE.md

Best practices

  • Read changed files in full — diffs can miss context required for correct feedback
  • Prioritize correctness and clarity over stylistic preferences; mark style as suggestions
  • Explain why changes are needed and suggest minimal, focused fixes
  • Flag unchecked edge cases, excessive allocations, and incorrect error propagation
  • Distinguish between required fixes (bugs, regressions) and optional improvements (performance, readability)

Example use cases

  • Reviewing a PR that introduces a new subcommand for parallel worktree creation
  • Validating a refactor that replaces manual loops with iterator chains
  • Checking a change that alters error messages or uses a different error crate
  • Assessing tests that claim to cover concurrency or filesystem edge cases
  • Running as an automated reviewer to enforce the project's CLI and naming conventions

FAQ

What project conventions should I consult?

Read CLAUDE.md in the repository root for conventions about command naming, error handling, and testing.

How do I mark suggestions versus required changes?

Label issues that break functionality or tests as required; mark readability, micro-optimizations, and stylistic items as suggestions.