This skill enforces verification before declaring completion by executing fresh checks and presenting evidence before any status claim.
```
npx playbooks add skill oimiragieo/agent-studio --skill verification-before-completion
```
Copy the command above to add this skill to your agents.
---
name: verification-before-completion
description: Gate function preventing unverified completion claims. Use before claiming any task is done.
version: 1.0
model: sonnet
invoked_by: both
user_invocable: true
tools: [Read, Bash]
best_practices:
- Run verification commands before claiming success
- Read full command output
- Never claim completion without evidence
error_handling: strict
streaming: supported
---
# Verification Before Completion
## Overview
Claiming work is complete without verification is dishonesty, not efficiency.
**Core principle:** Evidence before claims, always.
**Violating the letter of this rule is violating the spirit of this rule.**
## The Iron Law
```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```
If you haven't run the verification command in this message, you cannot claim it passes.
## The Gate Function
```
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
```
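A shell sketch of the gate, for concreteness; `pnpm test` below is a stand-in for whatever command actually proves the claim at hand:

```bash
# Run the FULL proving command fresh and capture everything (step 2).
output=$(pnpm test 2>&1)
status=$?                  # capture the exit code explicitly (step 3)

printf '%s\n' "$output"    # read the FULL output, not a summary line (step 3)

# Step 4: only exit 0 plus the output itself supports the claim.
if [ "$status" -eq 0 ]; then
  echo "verified: exit 0; make the claim WITH this output as evidence"
else
  echo "not verified: exit $status; report the actual status instead"
fi
```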
## Common Failures
| Claim | Requires | Not Sufficient |
| --------------------- | -------------------------------------- | ------------------------------ |
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | `pnpm lint:fix` output: 0 errors | Partial check, extrapolation |
| Format clean | `pnpm format` output: no changes | Visual inspection, assumption |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
| Code quality gates | `pnpm lint:fix` + `pnpm format` passed | Tests passing |
## Red Flags - STOP
- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting the work to be over
- **ANY wording implying success without having run verification**
## Rationalization Prevention
| Excuse | Reality |
| --------------------------------------- | ---------------------------------- |
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence does not equal evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter does not equal compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion does not equal excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
## Key Patterns
**Tests:**
```
CORRECT: [Run test command] [See: 34/34 pass] "All tests pass"
WRONG: "Should pass now" / "Looks correct"
```
**Regression tests (TDD Red-Green):**
```
CORRECT: Write -> Run (pass) -> Revert fix -> Run (MUST FAIL) -> Restore -> Run (pass)
WRONG: "I've written a regression test" (without red-green verification)
```
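One way to run the red-green cycle, sketched in shell under two assumptions: the fix exists as uncommitted changes to tracked files, and the new test lives in an untracked file, so `git stash` reverts only the fix:

```bash
pnpm test || exit 1   # green: the suite must pass WITH the fix in place
git stash             # temporarily remove the fix (the untracked test stays)
if pnpm test; then
  echo "RED CHECK FAILED: suite passes without the fix; the test proves nothing"
else
  echo "red confirmed: suite fails without the fix"
fi
git stash pop         # restore the fix
pnpm test             # green again: must pass before any claim
```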
**Build:**
```
CORRECT: [Run build] [See: exit 0] "Build passes"
WRONG: "Linter passed" (linter doesn't check compilation)
```
**Lint and Format (BLOCKING GATE):**
```
CORRECT: [Run pnpm lint:fix] [See: 0 errors] [Run pnpm format] [See: no changes] "Lint and format clean"
WRONG: "Code looks formatted" / "No obvious lint issues" / "Should be clean"
```
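A shell sketch of this gate, assuming the `pnpm lint:fix` and `pnpm format` scripts named above and a git working tree. Since a formatter rewrites files in place, comparing the working-tree diff before and after is one way to prove "no changes":

```bash
# Lint must pass outright before format is even checked.
pnpm lint:fix || { echo "lint errors remain; not clean"; exit 1; }

# Hash the diff before and after formatting (sha1sum is GNU coreutils;
# use shasum on macOS).
before=$(git diff | sha1sum)
pnpm format
after=$(git diff | sha1sum)
if [ "$before" = "$after" ]; then
  echo "format: no changes; lint and format verified clean"
else
  echo "format rewrote files; review the diff and run the gate again"
fi
```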
**Requirements:**
```
CORRECT: Re-read plan -> Create checklist -> Verify each -> Report gaps or completion
WRONG: "Tests pass, phase complete"
```
**Agent delegation:**
```
CORRECT: Agent reports success -> Check VCS diff -> Verify changes -> Report actual state
WRONG: Trust agent report
```
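A minimal sketch of independent verification in a git repository; `pnpm test` is again a placeholder for whatever command proves the delegated work:

```bash
# The agent's "success" is a claim, not evidence. Inspect the repository yourself.
git status --porcelain   # were any files touched at all?
git diff --stat          # which files changed, and how much
git diff                 # read the actual changes
pnpm test                # re-run the proving command independently
```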
## Why This Matters
From failure analysis:
- "I don't believe you" - trust broken
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion -> redirect -> rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."
## When To Apply
**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents
**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness
## The Bottom Line
**No shortcuts for verification.**
Run the command. Read the output. THEN claim the result.
This is non-negotiable.
## Memory Protocol (MANDATORY)
**Before starting:**
Read `.claude/context/memory/learnings.md`
**After completing:**
- New pattern -> `.claude/context/memory/learnings.md`
- Issue found -> `.claude/context/memory/issues.md`
- Decision made -> `.claude/context/memory/decisions.md`
> ASSUME INTERRUPTION: If it's not in memory, it didn't happen.
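A shell sketch of the protocol, assuming the memory files already exist at the paths above; the appended entry is a made-up example, so record what actually happened:

```bash
# Before starting: load prior learnings into context.
cat .claude/context/memory/learnings.md

# After completing: append the outcome immediately.
echo "- $(date +%F): gate caught a stale test result before a completion claim" \
  >> .claude/context/memory/learnings.md
```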
## FAQ

**What counts as valid evidence?**
Command output showing zero failures or exit 0, explicit failure counts, and VCS diffs. Screenshots or logs are acceptable only if they include the full, unfiltered output and the exit status.

**Can I rely on prior verification runs?**
No. Verification must be fresh and run in the current context; a past run may not reflect the current state.