
checkpoint skill


This skill pauses work to assess context and surface 2-5 actionable next steps with rationale.

npx playbooks add skill petekp/agent-skills --skill checkpoint


SKILL.md
---
name: checkpoint
description: |
  Meta-cognitive decision support that analyzes current context and surfaces intelligent next-step options to the user. Use this skill when: (1) User explicitly invokes /checkpoint, (2) Significant work has been completed and a checkpoint is valuable, (3) Uncertainty or ambiguity exists about requirements or approach, (4) Task complexity has expanded beyond initial scope, (5) Before finalizing or committing to ensure nothing is missed. This skill pauses execution, assesses the situation holistically, and presents 2-5 contextually-appropriate options via AskUserQuestion, with a recommended option and rationale.
---

# Checkpoint

Pause, assess, and surface intelligent next-step options to the user.

## When to Trigger Proactively

Suggest this skill (without being asked) when detecting:

- **Completion signal**: A feature, fix, or milestone just finished
- **Uncertainty signal**: Requirements unclear, multiple valid paths, or low confidence in current direction
- **Complexity signal**: Scope expanded, unexpected dependencies emerged, or task is taking longer than expected
- **Drift signal**: Work may have diverged from user's original intent
- **Quality signal**: Code works but may benefit from review, testing, or refactoring

## Workflow

### 1. Context Assessment

Silently evaluate the current state across these dimensions (do not output this analysis):

- **Progress**: What has been accomplished? What remains?
- **Quality**: Is the work solid, or are there rough edges?
- **Alignment**: Does recent work match what the user actually wants?
- **Uncertainty**: What assumptions were made? What's unclear?
- **Risk**: What could go wrong? What hasn't been tested?
- **Efficiency**: Is there a better path forward?
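For concreteness, the dimensions above could be captured in a simple record. This is an illustrative sketch only: `Assessment` and its field names are assumptions mirroring the list, not part of the skill's actual interface.

```python
from dataclasses import dataclass

# Illustrative sketch: "Assessment" and its fields are assumed names
# mirroring the dimensions above, not a real API.
@dataclass
class Assessment:
    progress: str     # what has been accomplished, what remains
    quality: str      # solid work vs. rough edges
    alignment: str    # does recent work match user intent?
    uncertainty: str  # assumptions made, open questions
    risk: str         # what could go wrong, what's untested
    efficiency: str   # is there a better path forward?
```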

### 2. Generate Options

Based on the assessment, generate 2-5 contextually appropriate options. Draw from (but don't limit yourself to) these archetypes:

| Archetype | When Relevant |
|-----------|---------------|
| **Commit progress** | Meaningful progress made, good stopping point |
| **Systems audit** | Complex changes, potential for bugs or regressions |
| **Prioritize/plan** | Multiple pending tasks, unclear what matters most |
| **Re-evaluate decisions** | Low confidence in recent choices, new information available |
| **Clarify with user** | Assumptions made, requirements ambiguous |
| **Test/verify** | Code works but edge cases untested |
| **Refactor/clean up** | Code functional but messy |
| **Document** | Complex logic that needs explanation |
| **Step back** | May be overcomplicating or missing simpler solution |
| **Continue current path** | Clear next step, no reason to pause |

**Option generation principles:**
- Options should be meaningfully different, not variations of the same thing
- Include at least one "continue forward" option when momentum is valuable
- Include at least one "pause and verify" option when risk is present
- Avoid analysis paralysis: fewer sharp options beat many vague ones
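These principles can be sketched as a small selection routine. All names here (the archetype strings, `curate`) are assumptions made for the example, not a real API.

```python
# Illustrative sketch of the curation principles above; all names are
# assumptions for the example, not part of the skill's interface.
FORWARD = {"Continue current path", "Commit progress"}
VERIFY = {"Test/verify", "Systems audit", "Clarify with user"}

def curate(candidates: list[str], risk_present: bool) -> list[str]:
    """Trim to a few sharp, distinct options and guarantee coverage."""
    # Meaningfully different: drop duplicates and keep only the sharpest
    # few, leaving room for the guaranteed archetypes below (5 total max).
    picked = list(dict.fromkeys(candidates))[:3]
    if risk_present and not VERIFY & set(picked):
        picked.append("Test/verify")            # at least one pause-and-verify
    if not FORWARD & set(picked):
        picked.append("Continue current path")  # at least one forward-motion
    return picked
```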

### 3. Select Recommendation

Choose one option as the recommendation. It should reflect:

- What a thoughtful senior engineer would do here
- What reduces the risk of wasted effort or rework
- What serves the user's underlying goals (not just their stated requests)

### 4. Present via AskUserQuestion

Use AskUserQuestion with this structure:

```
Question: "What would you like to do next?"
Header: "Next step" (or contextually appropriate 1-2 words)
Options: [generated options with descriptions]
```

**Option format:**
- `label`: Action verb phrase (e.g., "Commit current progress", "Run systems audit")
- `description`: 1 sentence explaining what this involves and why it might be valuable

**Recommendation:**
- Place recommended option FIRST in the list
- Append "(Recommended)" to its label
- Include rationale in the description
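The label and ordering rules above can be sketched as a small helper. `Option` and `format_options` are hypothetical names for illustration; the real AskUserQuestion parameter shape may differ.

```python
# Hypothetical sketch of the option-formatting rules above.
# "Option" and "format_options" are illustrative names, not a real API.
from dataclasses import dataclass

@dataclass
class Option:
    label: str        # action verb phrase, e.g. "Commit current progress"
    description: str  # one sentence: what this involves and why
    recommended: bool = False

def format_options(options: list[Option]) -> list[dict]:
    """Place the recommended option first and tag its label."""
    ordered = sorted(options, key=lambda o: not o.recommended)
    return [
        {
            "label": o.label + (" (Recommended)" if o.recommended else ""),
            "description": o.description,
        }
        for o in ordered
    ]
```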

### Example Output

For a scenario where a feature was just implemented with some shortcuts:

```
AskUserQuestion:
  question: "Feature implementation complete. What would you like to do next?"
  header: "Next step"
  options:
    - label: "Review and refactor (Recommended)"
      description: "Clean up the shortcuts taken during implementation before they become technical debt. The core logic works but could be more maintainable."
    - label: "Add test coverage"
      description: "Write tests for the new feature to catch edge cases and prevent regressions."
    - label: "Commit and move on"
      description: "The feature works; commit it and tackle the next task. You can refactor later if needed."
    - label: "Walk me through what was built"
      description: "Explain the implementation so you can verify it matches your expectations before proceeding."
```

## Anti-Patterns

- **Don't overthink**: This skill should take seconds, not minutes
- **Don't list every possible option**: Curate the most valuable 2-5
- **Don't recommend "ask user" when the situation is clear**: Have a point of view
- **Don't trigger too frequently**: Reserve for genuine decision points, not every minor step
- **Don't explain the assessment process**: Just present the options naturally

Overview

This skill provides meta-cognitive decision support that pauses execution, evaluates the current work context, and surfaces 2–5 clear next-step options to the user. It recommends one option with a short rationale to reduce risk, avoid rework, and keep momentum aligned with the user's goals.

How this skill works

When invoked, the skill silently assesses progress, quality, alignment, uncertainty, risk, and efficiency. It generates a curated set of distinct options (including at least one continue and one verify choice), selects a recommended action a senior engineer would take, and presents the choices via an AskUserQuestion-style prompt with short labels and descriptions.

When to use it

  • After a meaningful milestone, feature, or fix completes and a decision is needed
  • When requirements are ambiguous or multiple valid approaches exist
  • If task scope or complexity increases beyond initial expectations
  • When confidence is low or critical assumptions remain unverified
  • Before finalizing, committing, or shipping to catch omissions early

Best practices

  • Keep the assessment quick and focused—this should take seconds, not minutes
  • Provide 2–5 sharply different options, not many subtle variants
  • Always include at least one forward-motion option and one pause/verify option
  • Make a clear recommendation and explain the risk-reduction rationale
  • Trigger sparingly—reserve for real decision points to avoid interrupting flow

Example use cases

  • A feature build finishes but used a few shortcuts; present options like refactor, add tests, or commit
  • Multiple pending tasks compete for priority; propose a re-prioritize or continue-now choice
  • Unclear requirements discovered mid-implementation; offer to clarify, run a systems audit, or pause
  • Scope expanded with cross-module changes; suggest a systems audit, document changes, or proceed carefully
  • Before final commit: offer test/verify, quick code review, or commit-and-iterate

FAQ

How many options should the skill present?

Present between 2 and 5 curated options—enough choice to be useful but few enough to avoid paralysis.

Should the skill always recommend pausing to verify?

No. The recommendation reflects risk and goals: if momentum is safe and alignment is clear, a recommendation to continue is valid.