verification-before-completion_obra skill

This skill requires you to verify every completion claim by running the relevant verification command and reading its full output before announcing success.

This is most likely a fork of the verification-before-completion skill from obra.
npx playbooks add skill jackspace/claudeskillz --skill verification-before-completion_obra

Review the files below or copy the command above to add this skill to your agents.

Files (3)
SKILL.md (4.1 KB)
---
name: verification-before-completion
description: Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always
---

# Verification Before Completion

## Overview

Claiming work is complete without verification is dishonesty, not efficiency.

**Core principle:** Evidence before claims, always.

**Violating the letter of this rule is violating the spirit of this rule.**

## The Iron Law

```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```

If you haven't run the verification command in this message, you cannot claim it passes.

## The Gate Function

```
BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step = lying, not verifying
```
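
A minimal sketch of this gate as a script, assuming Python and `subprocess`; the `pytest` command and the printed claims are illustrative placeholders for the project's real verification:

```python
import subprocess

def verify(command: list[str]) -> bool:
    """Run the full verification command fresh and surface the evidence.

    Returns True only when the command exits 0; a success claim is
    permitted only then, and must cite the output printed here.
    """
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout, end="")
    print(result.stderr, end="")
    print(f"exit code: {result.returncode}")
    return result.returncode == 0

# Illustrative usage: substitute the project's real verification command.
if verify(["pytest", "-q"]):
    print("Claim supported: tests pass (evidence above).")
else:
    print("Claim NOT supported: report the failures above instead.")
```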

## Common Failures

| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |

## Red Flags - STOP

- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting the work to be over
- **ANY wording implying success without having run verification**

## Rationalization Prevention

| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |

## Key Patterns

**Tests:**
```
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
```

**Regression tests (TDD Red-Green):**
```
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
```
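
One way to script the red-green check, assuming the fix exists as uncommitted changes to tracked files and the new test file is still untracked, so `git stash` reverts only the fix:

```python
import subprocess

def tests_pass() -> bool:
    """Run the suite fresh; True only on exit code 0."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

# Green: with the fix applied, the regression test passes.
assert tests_pass(), "expected green with the fix in place"

# Red: stash the fix (tracked changes only; the untracked test file
# stays put) and confirm the test actually fails without the fix.
subprocess.run(["git", "stash"], check=True)
try:
    assert not tests_pass(), "test passes without the fix: it proves nothing"
finally:
    # Restore the fix, then confirm green again.
    subprocess.run(["git", "stash", "pop"], check=True)

assert tests_pass(), "expected green after restoring the fix"
```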

**Build:**
```
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
```

**Requirements:**
```
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
```

**Agent delegation:**
```
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
```
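
A sketch of independent verification after delegation, assuming a git checkout and `pytest` as a stand-in for the real verification command:

```python
import subprocess

# Don't trust the agent's "success" report: inspect what actually changed...
diff = subprocess.run(
    ["git", "diff", "--stat"], capture_output=True, text=True, check=True
)
print(diff.stdout or "No changes on disk: the agent's report is empty.")

# ...then run the verification command yourself before accepting the work.
result = subprocess.run(["pytest", "-q"])
print(f"verification exit code: {result.returncode}")
```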

## Why This Matters

From 24 failure memories:
- Your human partner said "I don't believe you" - trust broken
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion → redirect → rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."

## When To Apply

**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents

**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness

## The Bottom Line

**No shortcuts for verification.**

Run the command. Read the output. THEN claim the result.

This is non-negotiable.

Overview

This skill enforces a strict pre-commit verification habit: never claim work is complete without running the exact verification command and presenting its output. It treats evidence as mandatory and prevents premature commits, PRs, or status statements that are unsupported by fresh verification. The goal is to eliminate false positives, wasted rework, and broken trust caused by unverified claims.

How this skill works

Before any claim of success, the skill requires you to identify the definitive verification command, run it in full, inspect the exit code and complete output, and only then state the result with evidence. If verification fails, you must report the actual state with logs or test output rather than asserting completion. The skill functions as a mental gate or checklist that stops commits, PRs, or expressions of satisfaction until verification evidence is produced.

When to use it

  • Before committing code, pushing branches, or opening a pull request
  • Before declaring tests, linters, or builds as passing
  • Before marking a bug as fixed or a task as done
  • Before moving to the next task or delegating results to another agent
  • When an agent or tool reports success without independent verification

Best practices

  • Always run the full verification command fresh in the current environment; don’t reuse stale output
  • Capture and share the full output and exit code as evidence when claiming success (a sketch follows this list)
  • Perform red-green checks for regression tests: confirm the test fails without the fix, then passes with it
  • Avoid ambiguous language like “probably”, “should”, or “looks good” before verification
  • Treat agent success messages as hypotheses to be independently verified
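
A small sketch of capturing that evidence, assuming Python; the command and log file name are placeholders:

```python
import datetime
import subprocess

command = ["pytest", "-q"]  # placeholder: use the project's real check
result = subprocess.run(command, capture_output=True, text=True)

# Persist the full output and exit code so the claim can cite a record,
# not memory or a stale run.
with open("verification-evidence.log", "w") as log:
    log.write(f"ran {' '.join(command)} at {datetime.datetime.now()}\n")
    log.write(result.stdout)
    log.write(result.stderr)
    log.write(f"\nexit code: {result.returncode}\n")

print("pass" if result.returncode == 0 else "fail",
      "- see verification-evidence.log")
```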

Example use cases

  • Run the full test suite and paste the 0-failures output before closing a ticket
  • Execute the build command to completion and include exit 0 before merging a release branch
  • Reproduce the original failing test, confirm it fails, apply the fix, then rerun and show passing results
  • Verify linter and typechecker outputs separately and include both outputs before claiming code quality
  • Check VCS diffs and run verification after an automated agent applies changes before accepting them

FAQ

What counts as sufficient evidence?

The full command output plus the exit code or explicit pass/fail summary produced by the verification tool run in the current environment.

Can I rely on a previous run?

No — evidence must be fresh and reproducible in the present context; prior runs are insufficient.