
verification-before-completion skill


This skill enforces verification before completion by running commands, analyzing outputs, and only claiming success with evidence.

This is most likely a fork of the verification-before-completion skill from mamba-mental.
npx playbooks add skill yousufjoyian/claude-skills --skill verification-before-completion

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (4.1 KB)
---
name: verification-before-completion
description: Run verification commands and confirm output before claiming success
---

# Verification Before Completion

## Overview

Claiming work is complete without verification is dishonesty, not efficiency.

**Core principle:** Evidence before claims, always.

**Violating the letter of this rule is violating the spirit of this rule.**

## The Iron Law

```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```

If you haven't run the verification command in this message, you cannot claim it passes.

## The Gate Function

```
BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step = lying, not verifying
```
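
The gate can be mechanized. Below is a minimal sketch in Python; the `verify` helper and the `pytest` invocation are illustrative assumptions, not part of this skill:

```
import subprocess

def verify(claim: str, command: str) -> bool:
    """Run the proving command fresh and report status only with evidence."""
    # Step 2: execute the FULL command, capturing all output.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    # Step 3: read the full output and the exit code.
    evidence = (result.stdout + result.stderr).strip()
    passed = result.returncode == 0
    # Steps 4-5: state the actual status, with evidence, either way.
    status = "VERIFIED" if passed else "FAILED"
    print(f"{status}: {claim}")
    print(f"$ {command}  (exit {result.returncode})")
    print(evidence)
    return passed

# Only claim "All tests pass" if this returns True.
verify("All tests pass", "pytest")
```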

## Common Failures

| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |

## Red Flags - STOP

- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting the work to be over
- **ANY wording implying success without having run verification**

## Rationalization Prevention

| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |

## Key Patterns

**Tests:**
```
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
```

**Regression tests (TDD Red-Green):**
```
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
```
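
One way to script the red-green cycle, as a hedged sketch: it assumes git and pytest, and that the regression test is already committed while the fix is still uncommitted (so `git stash` reverts only the fix). All paths are illustrative.

```
import subprocess

def run(cmd: str) -> int:
    return subprocess.run(cmd, shell=True).returncode

test = "pytest tests/test_regression.py"
# Green: the test passes with the fix in place.
assert run(test) == 0, "test must pass with the fix applied"
# Red: temporarily revert the (uncommitted) fix; the test MUST fail now.
run("git stash")
try:
    assert run(test) != 0, "test never failed - it proves nothing"
finally:
    run("git stash pop")  # restore the fix even if the assert trips
# Green again: confirm the restored fix still passes.
assert run(test) == 0, "test must pass again after restoring the fix"
```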

**Build:**
```
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
```

**Requirements:**
```
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
```
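
A hedged sketch of the checklist step, assuming each requirement has a proving command; the items and test paths here are invented for illustration:

```
import subprocess

checklist = {
    "Login flow works": "pytest tests/test_login.py",
    "Rate limiting enforced": "pytest tests/test_rate_limit.py",
}
# Verify each item by running its proving command; collect failures as gaps.
gaps = [item for item, cmd in checklist.items()
        if subprocess.run(cmd, shell=True).returncode != 0]
if gaps:
    print("GAPS: " + ", ".join(gaps))
else:
    print("All checklist items verified.")
```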

**Agent delegation:**
```
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
```
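
A sketch of the independent check, assuming git: instead of trusting the report, inspect the working tree yourself.

```
import subprocess

# List every changed or untracked file, regardless of what the agent claimed.
status = subprocess.run(["git", "status", "--porcelain"],
                        capture_output=True, text=True).stdout
if not status.strip():
    print("Agent reported success, but the working tree shows no changes.")
else:
    print("Changes to review and verify independently:\n" + status)
```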

## Why This Matters

From 24 failure memories:
- Your human partner said "I don't believe you" - trust broken
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion → redirect → rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."

## When To Apply

**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents

**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness

## The Bottom Line

**No shortcuts for verification.**

Run the command. Read the output. THEN claim the result.

This is non-negotiable.

Overview

This skill enforces running verification commands and presenting fresh evidence before claiming work is complete. It codifies a strict gate: identify the proving command, run it, read full output and exit code, and only then state success with supporting evidence. The skill prevents assumptions, partial checks, and agent-only reports from being treated as verification.

How this skill works

The skill inspects messages that claim completion, satisfaction, or success and requires an explicit verification sequence before any positive assertion. It prompts or blocks until you identify the exact command, run it, capture full stdout/stderr and exit code, and attach that output as evidence. If the verification fails, the skill forces an evidence-backed status report instead of an unwarranted claim.
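
The skill does not prescribe an evidence format, but a record like the following sketch (Python; the structure is assumed for illustration) captures everything the gate asks for:

```
import datetime
import subprocess

def capture_evidence(command: str) -> dict:
    """Run a command fresh and bundle the proof needed for a claim."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return {
        "command": command,
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```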

When to use it

  • Before claiming tests, builds, linters, or bug fixes pass
  • Before committing, pushing, or opening a pull request
  • Before moving to the next task or marking work done
  • Whenever an agent or tool reports success without independent proof
  • When expressing satisfaction or finality about a change

Best practices

  • Always run the full verification command in the current message context — no referencing past runs
  • Include complete command output and exit code when asserting success
  • Perform red-green cycles for regression tests (verify fail, then verify pass)
  • Avoid qualitative language like “should” or “probably” in status messages (a simple check for this is sketched after this list)
  • Treat agent success messages as prompts to run independent verification
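
As a minimal sketch of that wording check (the flag list is illustrative, not exhaustive):

```
RED_FLAGS = ("should", "probably", "seems to", "i'm confident")

def hedging_words(message: str) -> list:
    """Return any red-flag hedges found in a status message."""
    lowered = message.lower()
    return [flag for flag in RED_FLAGS if flag in lowered]

hits = hedging_words("Tests should pass now")
if hits:
    print(f"STOP: hedging language {hits} - run the verification instead")
```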

Example use cases

  • CI test run: execute test suite, paste the 0-failures output before claiming tests pass
  • Bug fix verification: reproduce original failing test, show it fails, apply fix, rerun and show it passes
  • Build verification: run build command, include build exit code and logs when declaring a successful build
  • PR readiness: run full verification checklist and attach outputs before marking the PR ready for review
  • Agent delegation: run VCS diff and relevant commands locally and present outputs before accepting an agent’s claim of completion

FAQ

What counts as acceptable evidence?

Complete command text, full stdout/stderr, and the exit code or explicit success line from the tool. Partial snippets or summaries are not sufficient.

Can I rely on a previous verification run?

No. Evidence must be fresh in the current message context; to rely on an earlier check, rerun the exact command and capture its output now.

How do I handle flaky tests?

Document the flaky behavior, show multiple runs (fail/pass) and include timestamps and environment details; avoid claiming stability without repeated, consistent evidence.
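
For example, a repeated-run log might be produced with a sketch like this (pytest and the test path are assumed for illustration):

```
import datetime
import subprocess

# Run the suspect test several times and record each result with a timestamp.
for attempt in range(1, 6):
    result = subprocess.run("pytest tests/test_flaky.py", shell=True,
                            capture_output=True, text=True)
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    print(f"run {attempt} @ {stamp}: exit {result.returncode}")
```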