validate-skill-functionality skill

/00-system/skills/skill-dev/validate-skill-functionality

This skill performs post-execution validation of a completed skill, documents findings, and flags issues to ensure reliable behavior and traceability.

npx playbooks add skill abdullahbeam/nexus-design-abdullah --skill validate-skill-functionality

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
3.1 KB
---
name: validate-skill-functionality
description: Load when user says "validate skill", "validate this skill", "review skill execution", "check skill", or "skill validation" at the end of executing a skill. Post-execution review workflow for validating that a skill worked correctly, documenting findings, and identifying issues.
---

# Validate Skill Functionality

**Purpose**: Systematic post-execution review to validate skill functionality and document findings.

**When to Use**: After a skill has completed its full execution workflow.

## Workflow

Follow these steps to validate skill functionality:

### Step 1: Review Execution Context

- Identify which skill was just executed
- Review what the skill was supposed to accomplish
- Check the SKILL.md to understand expected behavior
- Review conversation history to identify all tool calls made during execution

### Step 2: Validate File Loading

**Check that all required files were loaded correctly:**

- Review all Read tool calls in the conversation
- Verify SKILL.md was loaded (for skill execution context)
- Check if skill references other files (references/, scripts/, assets/)
- Confirm referenced files were actually loaded when needed
- Look for "File not found" errors or truncated reads
- Verify file paths match expected locations

**Example checks:**
```
✅ SKILL.md loaded: Yes (lines 1-88, complete)
✅ references/workflow.md loaded: Yes (when needed in Step 2)
❌ references/error-handling.md loaded: No (should have been loaded but wasn't)
✅ scripts/bulk-complete.py executed: Yes (correct parameters)
```
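The file-loading checks above can be sketched programmatically. This is a minimal illustration assuming a hypothetical list of tool-call records (`tool`, `path`, `error` keys); the real conversation log format may differ.

```python
# Hedged sketch of the Step 2 checks, assuming a hypothetical list of
# tool-call records; the real conversation log format may differ.

def check_file_loads(tool_calls, required_files):
    """Split required files into (loaded, missing) based on Read tool calls."""
    read_paths = {
        call["path"]
        for call in tool_calls
        if call.get("tool") == "Read" and not call.get("error")
    }
    loaded = [f for f in required_files if f in read_paths]
    missing = [f for f in required_files if f not in read_paths]
    return loaded, missing

# Illustrative log: two successful Reads and one script execution.
calls = [
    {"tool": "Read", "path": "SKILL.md"},
    {"tool": "Read", "path": "references/workflow.md"},
    {"tool": "Bash", "path": "scripts/bulk-complete.py"},
]
required = ["SKILL.md", "references/workflow.md", "references/error-handling.md"]
loaded, missing = check_file_loads(calls, required)
```

Anything left in `missing` corresponds to a ❌ line in the example checks above.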

### Step 3: Validate Skill Nesting/Wrapping

**Check if skills correctly loaded nested skills:**

- Identify if the skill called other skills (e.g., execute-project calls create-skill)
- Verify nested skills were loaded using nexus-loader.py or explicit Read
- Confirm nested skill workflows were followed correctly
- Check that context was passed properly between skills
- Validate that nested skill outputs fed back correctly

**Example checks:**
```
Primary Skill: execute-project
  ✅ Loaded: Yes (via nexus-loader.py --skill execute-project)

  Nested Skill: create-skill
    ✅ Loaded: Yes (via nexus-loader.py --skill create-skill)
    ✅ SKILL.md read: Yes (complete)
    ✅ Workflow followed: Yes (all 7 steps)
    ✅ Context passed: Yes (user's workflow → create-skill)

  Nested Skill: close-session
    ✅ Loaded: Yes (auto-triggered)
    ✅ workflow.md loaded: Yes (as required)
    ✅ All 8 steps executed: Yes
```
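The nesting check reduces to confirming each invoked skill actually read its own SKILL.md. A hedged sketch, assuming a hypothetical per-skill record of invocations and files read (the field names are assumptions, not a real API):

```python
# Illustrative Step 3 check: flag nested skills whose SKILL.md was never read.
# The invocation record structure here is an assumption for the sketch.

def check_nesting(invocations):
    """Return a list of issues for nested skills missing their SKILL.md load."""
    issues = []
    for inv in invocations:
        if "SKILL.md" not in inv.get("files_read", []):
            issues.append(f"{inv['skill']}: SKILL.md not loaded")
    return issues

invs = [
    {"skill": "create-skill", "files_read": ["SKILL.md", "references/workflow.md"]},
    {"skill": "close-session", "files_read": ["references/workflow.md"]},
]
problems = check_nesting(invs)
```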

### Step 4: Verify Expected Outputs

- Confirm the skill completed its workflow
- Check that outputs match expectations
- Verify all steps executed correctly
- Validate files were created/modified as expected
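For the file-artifact part of this step, a simple existence check over the expected output paths is often enough. A minimal sketch (the paths are illustrative):

```python
import os
import tempfile

def verify_outputs(expected_paths):
    """Return the expected output paths that do not exist on disk."""
    return [p for p in expected_paths if not os.path.exists(p)]

# Usage: only one of two expected artifacts was actually created.
with tempfile.TemporaryDirectory() as d:
    created = os.path.join(d, "report.md")
    open(created, "w").close()
    missing = verify_outputs([created, os.path.join(d, "summary.md")])
```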

### Step 5: Check for Errors or Edge Cases

- Look for any errors or warnings during execution
- Identify edge cases or unexpected behavior
- Note any deviations from expected workflow
- Check for incomplete reads or missing context
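The error scan can be mechanized as a pattern search over the execution log. A sketch with an assumed set of patterns; tune them to whatever the actual log contains:

```python
import re

# Patterns worth flagging during Step 5; extend as needed.
ERROR_PATTERNS = [r"file not found", r"\berror\b", r"\bwarning\b", r"truncat"]

def scan_for_errors(log_lines):
    """Return (line_number, line) pairs matching any error pattern."""
    hits = []
    for num, line in enumerate(log_lines, start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in ERROR_PATTERNS):
            hits.append((num, line.strip()))
    return hits

log = [
    "Read SKILL.md: complete",
    "Error: File not found: references/error-handling.md",
    "All steps executed",
]
hits = scan_for_errors(log)
```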

### Step 6: Report Findings (≤5 lines)

Report to user verbally:
- ✅ What worked
- ❌ Issues found (if any)
- 💡 Recommendations (if any)

**NO documentation files** - Follow orchestrator.md ≤5 line rule
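The ≤5 line rule can be enforced mechanically when assembling the report. A small sketch (the helper name and inputs are illustrative, not part of the skill's API):

```python
def format_report(worked, issues, recommendations):
    """Assemble the verbal report, truncated to five lines per the rule."""
    lines = [f"✅ {w}" for w in worked]
    lines += [f"❌ {i}" for i in issues]
    lines += [f"💡 {r}" for r in recommendations]
    return "\n".join(lines[:5])

report = format_report(
    worked=["All 7 workflow steps executed"],
    issues=["references/error-handling.md was never loaded"],
    recommendations=["Re-run Step 2 with the reference loaded"],
)
```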

Overview

This skill provides a concise post-execution review to validate that a recently run skill completed correctly, document findings, and surface issues. It runs after a skill finishes and produces a short, actionable summary for the user. The outcome is a brief verbal report indicating successes, problems, and recommended next steps.

How this skill works

After a skill completes, the validator inspects the execution context, conversation history, tool calls, and any files the skill referenced or produced. It verifies file loads, nested skill invocations, and output artifacts, and checks for errors or truncated reads. The validator then synthesizes a short report (five lines or fewer) listing what worked, any issues, and recommendations.

When to use it

  • Immediately after a skill finishes full execution
  • When a skill calls external files, scripts, or other nested skills
  • Before handing results back to the user or triggering follow-up workflows
  • When you suspect missing context, truncated reads, or partial outputs

Best practices

  • Confirm all referenced documentation and files were loaded when needed
  • Verify nested skills were invoked and returned expected context
  • Check produced files and outputs against expected artifacts and formats
  • Scan logs and conversation history for errors, warnings, or truncated reads
  • Keep the final report to five lines: what worked, issues, and a short recommendation

Example use cases

  • Validate a skill that generated reports and uploaded result files to ensure all files were created
  • Review a multi-skill workflow where one skill invoked others and passed context between them
  • Confirm a script executed with correct parameters and produced expected output files
  • Detect a missing file load or a truncated read that caused incomplete results
  • Provide a rapid pass/fail and remediation suggestion before delivering results to a user

FAQ

How long should the validator report be?

Keep it to five lines or fewer: one line for successes, one for issues, and up to three lines for concise recommendations.

What if a nested skill failed silently?

Flag the nested invocation as failed or incomplete, note missing outputs or context, and recommend rerunning the nested skill with full logging enabled.