
verification-before-completion skill


This skill enforces verification before completion by guiding you to run tests, read outputs, and report evidence-based results.

This is most likely a fork of the verification-before-completion skill from obra.

`npx playbooks add skill sickn33/antigravity-awesome-skills --skill verification-before-completion`

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
4.1 KB
---
name: verification-before-completion
description: Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always
---

# Verification Before Completion

## Overview

Claiming work is complete without verification is dishonesty, not efficiency.

**Core principle:** Evidence before claims, always.

**Violating the letter of this rule is violating the spirit of this rule.**

## The Iron Law

```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```

If you haven't run the verification command in this message, you cannot claim it passes.

## The Gate Function

```
BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step = lying, not verifying
```
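
As a concrete illustration, here is a minimal sketch of the gate in Python (not part of the original skill): it runs a verification command, reads the full output and exit code, and only makes a success claim when the evidence supports it. The example command at the bottom is a placeholder assumption; substitute your project's canonical test or build command.

```python
import subprocess

def verify_and_report(command: list[str]) -> bool:
    """Run the full verification command and report based on its actual output."""
    # 1-2. IDENTIFY and RUN: execute the complete command, fresh, capturing everything.
    result = subprocess.run(command, capture_output=True, text=True)

    # 3. READ: the full output and exit code, not a summary or a memory of a past run.
    print(result.stdout + result.stderr)

    # 4. VERIFY: does the evidence confirm the claim?
    if result.returncode != 0:
        print(f"NOT passing: {' '.join(command)} exited with {result.returncode}")
        return False

    # 5. ONLY THEN: make the claim, with the evidence attached.
    print(f"Verified: {' '.join(command)} exited 0")
    return True

# Hypothetical usage (use your project's real test runner):
# verify_and_report(["pytest", "-q"])
```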

## Common Failures

| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |

## Red Flags - STOP

- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting the work to be over
- **ANY wording implying success without having run verification**

## Rationalization Prevention

| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |

## Key Patterns

**Tests:**
```
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
```

**Regression tests (TDD Red-Green):**
```
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
```
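
One way to automate the red-green cycle is sketched below, under the assumption that the fix is an uncommitted change to `fix_paths` and the regression test lives in other files, so the fix can be temporarily stashed with git; the test command is a placeholder.

```python
import subprocess

def passes(test_cmd: list[str]) -> bool:
    """Run the test command and return True only on exit code 0."""
    return subprocess.run(test_cmd).returncode == 0

def red_green_check(test_cmd: list[str], fix_paths: list[str]) -> bool:
    """Green -> revert fix -> must fail (red) -> restore -> green again."""
    if not passes(test_cmd):
        print("FAIL: test does not pass with the fix applied")
        return False

    # Temporarily revert only the fix (assumes the fix is uncommitted and the
    # regression test is in files outside fix_paths).
    subprocess.run(["git", "stash", "push", "--"] + fix_paths, check=True)
    try:
        failed_without_fix = not passes(test_cmd)
    finally:
        subprocess.run(["git", "stash", "pop"], check=True)

    if not failed_without_fix:
        print("FAIL: test still passes without the fix - it proves nothing")
        return False

    if not passes(test_cmd):
        print("FAIL: test no longer passes after restoring the fix")
        return False

    print("Red-green cycle verified")
    return True
```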

**Build:**
```
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
```

**Requirements:**
```
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
```

**Agent delegation:**
```
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
```
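
A hedged sketch of independent verification after delegation: it checks that the VCS diff actually contains changes and that the test suite passes, instead of trusting the agent's report. The commands are assumptions (git, plus a placeholder test runner), and the sketch assumes the agent left its edits uncommitted; otherwise diff against the base ref instead.

```python
import subprocess

def verify_agent_work(test_cmd: list[str]) -> bool:
    """Confirm an agent's reported success against the repository itself."""
    # Did the agent actually change anything? An empty diff contradicts "done".
    # (Assumes uncommitted edits; use e.g. `git diff main --stat` if the agent commits.)
    diff = subprocess.run(["git", "diff", "--stat"],
                          capture_output=True, text=True).stdout
    if not diff.strip():
        print("No changes in the working tree - agent report not supported")
        return False
    print(diff)

    # Do the changes actually pass verification?
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    print(result.stdout + result.stderr)
    if result.returncode != 0:
        print(f"Tests failing (exit {result.returncode}) despite agent success report")
        return False

    print("Agent changes verified: non-empty diff and passing tests")
    return True
```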

## Why This Matters

From 24 failure memories:
- Your human partner said "I don't believe you" - trust broken
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion → redirect → rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."

## When To Apply

**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents

**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness

## The Bottom Line

**No shortcuts for verification.**

Run the command. Read the output. THEN claim the result.

This is non-negotiable.

Overview

This skill enforces a strict verification-before-completion discipline: never claim work is done without fresh, observable evidence. It prevents premature commits, PRs, or status updates by requiring the exact verification command, its full output, and exit status before any success statement. The goal is honest, reproducible assertions backed by verifiable evidence.

How this skill works

Before declaring success, identify the single command that proves the claim (tests, build, linter, regression check). Run the command fully, read the complete output and exit code, and compare results to the claim. If output confirms the claim, include that evidence when reporting; if not, report the actual status with the output. Skip any step and you must not state completion or satisfaction.

When to use it

  • Right before committing code, creating a PR, or pushing changes
  • Before marking a task, ticket, or story as done
  • Prior to claiming bugs are fixed or regressions resolved
  • When delegating work to agents or reporting agent results
  • Before any positive statement about tests, builds, linting, or requirements

Best practices

  • Always run the canonical verification command for the claimed outcome (e.g., npm test, pytest, build script)
  • Capture and paste the full output and exit code when asserting success
  • Perform red-green regression cycles for bug fixes (prove failure then success)
  • Avoid qualifiers like "should", "probably", or "seems"—use evidence
  • Automate verification in CI and locally mirror the same commands

Example use cases

  • Run the test suite locally and include the '34/34 passed' output before creating a PR
  • Execute the build command and show exit 0 output before merging release branches
  • After an agent edits code, run VCS diff + tests and attach both outputs before trusting the agent
  • For a reported bug, run the original failing test, show it fails, apply fix, rerun and show it passes
  • When claiming requirements are met, run a checklist script and paste the verification report

FAQ

What if verification is flaky or non-deterministic?

Document flakes, run multiple iterations, stabilize tests, or add environment-specific guards; do not claim success until reproducible evidence exists.
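
For flaky suites, one option (a sketch, not part of the skill) is to rerun the same command several times and report the observed pass rate rather than relying on a single run; the test command and run count are assumptions.

```python
import subprocess

def pass_rate(test_cmd: list[str], runs: int = 5) -> float:
    """Run the verification command repeatedly and return the fraction of passes."""
    passed = sum(
        subprocess.run(test_cmd).returncode == 0
        for _ in range(runs)
    )
    rate = passed / runs
    print(f"{passed}/{runs} runs passed ({rate:.0%})")
    return rate

# Only claim success when the evidence is reproducible (e.g. every run passes);
# anything less is a flake to document and stabilize, not a pass.
```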

Can I rely on a prior run from another machine or time?

No. Evidence must be fresh and reproducible in the current context where the claim is made.