
This skill enforces verification before completion by running tests, builds, and lint checks, then reporting exact outcomes to ensure honest releases.

npx playbooks add skill secondsky/claude-skills --skill verification-before-completion

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
4.4 KB
---
name: verification-before-completion
description: Run verification commands and confirm output before claiming success. Use when about to claim work is complete, fixed, or passing, before committing or creating PRs.
version: 1.1.0
---

# Verification Before Completion

## Overview

Claiming work is complete without verification is dishonesty, not efficiency.

**Core principle:** Evidence before claims, always.

**Violating the letter of this rule is violating the spirit of this rule.**

## The Iron Law

```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```

If you haven't run the verification command in this message, you cannot claim it passes.

## The Gate Function

```
BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step = lying, not verifying
```
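
A minimal shell sketch of the gate, assuming bash and that `bun test` is the command that proves the claim (substitute your project's actual command; the log path is arbitrary):

```bash
# Run the FULL verification command fresh, keeping every line of output.
bun test 2>&1 | tee /tmp/verify.log
status=${PIPESTATUS[0]}

# Read the evidence before making any claim.
if [ "$status" -eq 0 ]; then
  echo "Exit code 0: claim success WITH /tmp/verify.log as evidence."
else
  echo "Exit code $status: report the actual failures; do not claim success."
fi
```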

## Common Verification Commands

**Tests:**
```bash
# Preferred: bun
bun test

# Alternative: npm
npm test
```

**Build:**
```bash
# Preferred: bun
bun run build

# Alternative: npm
npm run build
```

**Lint:**
```bash
# Preferred: bun
bun run lint

# Alternative: npm
npm run lint
```

**Type check:**
```bash
# Preferred: bun
bunx tsc --noEmit

# Alternative: npx
npx tsc --noEmit
```
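
If the project defines all four checks, a sketch that chains them and stops at the first failure (bun assumed; swap in the npm equivalents as needed):

```bash
#!/usr/bin/env bash
# Run every check in sequence; abort on the first failure so the
# evidence for the failing step is the last output on screen.
set -euo pipefail

bun test
bun run build
bun run lint
bunx tsc --noEmit

echo "All verification commands exited 0."
```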

## Common Failures

| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |

## Red Flags - STOP

- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting the work to be over
- **ANY wording implying success without having run verification**

## Rationalization Prevention

| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |

## Key Patterns

**Tests:**
```
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
```

**Regression tests (TDD Red-Green):**
```
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
```
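
One way to drive the cycle from the shell, assuming the fix is still uncommitted in `src/buggy-module.ts` and the new test lives in `regression.test.ts` (both paths are placeholders):

```bash
# Green: the new test passes with the fix in place.
bun test regression.test.ts

# Red: temporarily stash ONLY the fix, then confirm the test fails without it.
git stash push -- src/buggy-module.ts   # hypothetical path of the fixed file
bun test regression.test.ts             # this run MUST fail, or the test proves nothing

# Restore the fix and confirm green again.
git stash pop
bun test regression.test.ts
```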

**Build:**
```
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
```

**Requirements:**
```
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
```

**Agent delegation:**
```
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
```
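
A sketch of that independent check (commands assume git; the test command is illustrative):

```bash
# Did the agent actually change anything?
git status --short
git diff --stat

# Inspect the substance of the changes, not just the file list.
git diff

# Re-run the verification the task called for.
bun test
```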

## Why This Matters

From failure memories:
- False claims broke trust
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion → redirect → rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."

## When To Apply

**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents

**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness

## The Bottom Line

**No shortcuts for verification.**

Run the command. Read the output. THEN claim the result.

This is non-negotiable.

Overview

This skill enforces running verification commands and presenting fresh evidence before claiming work is complete, fixed, or passing. It prevents premature success statements by requiring explicit commands, full output review, and an exit-code or failure-count check. Use it to ensure honesty and reproducible state before commits, PRs, or handoffs.

How this skill works

The skill asks you to identify the exact command that proves your claim, execute it fully, and capture stdout/stderr and the exit code. It then requires you to read and interpret the output: count failures, confirm exit 0 where applicable, and attach the evidence. Only after those steps are satisfied does it permit any statement that the work is complete or passing.
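
As a sketch, the capture step can be as simple as the following, assuming a POSIX shell (the log filename is arbitrary):

```bash
# Capture stdout and stderr together, then record the exit code.
bun test >evidence.log 2>&1
exit_code=$?

cat evidence.log              # read the full output and count failures
echo "exit code: $exit_code"  # attach both the log and this value as evidence
```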

When to use it

  • Before committing code or opening a pull request
  • Before declaring tests, build, lint, or type checks passed
  • Before marking bugs as fixed or moving to the next task
  • Before delegating work to agents or teammates
  • Whenever you write a status update that implies success
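
For the commit and PR cases, one way to make the gate hard to skip is a minimal `.git/hooks/pre-commit` sketch like the following (assumes a bun project; mark the file executable and adjust the commands to your repo):

```bash
#!/usr/bin/env sh
# Block the commit unless the full verification suite passes right now.
bun test && bun run lint && bunx tsc --noEmit || {
  echo "Verification failed; commit blocked." >&2
  exit 1
}
```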

Best practices

  • Always run the full, canonical verification command (don’t rely on cached or partial runs)
  • Paste or attach full command output and include exit code or failure counts
  • Use the project’s preferred runner (e.g., bun), and fall back to the npm equivalents if it is unavailable
  • Follow red-green verification for regression tests (confirm failing state then passing state)
  • Avoid qualitative words like “should”, “probably”, or “seems”; report objective evidence

Example use cases

  • Running `bun test` and pasting the final summary (e.g., 34/34 passed) before writing “All tests pass”
  • Executing `bun run build` and showing exit 0 and build artifact timestamps before creating a release PR
  • Running type check (`bunx tsc --noEmit`) and including compiler output prior to merging
  • Doing a linter pass and including the linter’s zero-error output before closing lint-related issues
  • When an agent reports success, show the VCS diff and re-run failing tests locally to confirm

FAQ

What counts as sufficient evidence?

Sufficient evidence is the full, fresh command output plus an explicit success indicator (exit code 0, 0 failures, or the expected artifact). Partial logs or stale results are not sufficient.
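
As a hedged illustration (the exact summary format varies by runner and version), evidence for "all tests pass" looks roughly like:

```
$ bun test
...
 34 pass
 0 fail
Ran 34 tests across 6 files.

$ echo $?
0
```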

Can I rely on an agent’s internal success message?

No. Always independently run the verification command and inspect the VCS diff or artifacts. Agent messages are hypotheses, not proof.