
adhering-standards skill

/plugins/sys-core/skills/adhering-standards

This skill helps you implement and enforce standardized decision logic and success criteria for Python projects.

npx playbooks add skill git-fg/thecattoolkit --skill adhering-standards

Review the files below or copy the command above to add this skill to your agents.

Files (23)
SKILL.md
---
name: {SKILL_NAME}
description: {SKILL_DESCRIPTION}
allowed-tools: {RESTRICTED_TOOLS}
# context: fork  # Optional: use when isolation needed
# agent: {AGENT_NAME}  # Optional: bind to agent persona
---

# {HUMAN_READABLE_NAME}

## 1. Core Knowledge
{Passive knowledge base, key concepts, and terminology}

## 2. Decision Logic / Protocol
{Guidelines for the AI to follow when invoking this skill}

## 3. Success Criteria
{How to determine if the goal was achieved}

## 4. Anti-Patterns
{What to avoid}

Overview

This skill packages a concise, decision-driven knowledge base for task validation and action selection. It combines core concepts, a clear decision protocol, measurable success criteria, and a list of anti-patterns to avoid. The focus is on fast, repeatable decisions and verifiable outcomes for AI agents and automation pipelines.

How this skill works

The skill exposes a compact, passive knowledge base that defines the key terminology and domain constraints the agent relies on. On top of it sits a decision-logic layer: stepwise rules the agent follows to choose an action, escalate, or request clarification. A success-criteria component provides concrete checks and observable signals that confirm a goal was met, and an anti-pattern list flags common failure modes so the agent can self-correct.
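As a rough sketch, the four components might map onto code like the following; every name here (Skill, decide, verify, the field names) is illustrative, not part of the skill's actual interface:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    knowledge: dict[str, str]                  # 1. Core Knowledge: terms and constraints
    rules: list[Callable[[dict], str | None]]  # 2. Decision Logic: ordered rules -> action or None
    checks: list[Callable[[dict], bool]]       # 3. Success Criteria: observable pass/fail signals
    anti_patterns: list[str] = field(default_factory=list)  # 4. Anti-Patterns: behaviors to audit against

    def decide(self, task: dict) -> str:
        # Walk the rules in order; the first rule that fires wins.
        for rule in self.rules:
            action = rule(task)
            if action is not None:
                return action
        return "request_clarification"  # explicit fallback when no rule applies

    def verify(self, result: dict) -> bool:
        # The goal counts as met only if every success check passes.
        return all(check(result) for check in self.checks)
```

The point is the shape, not the details: rules are ordered and return early, and verification is a conjunction of independent checks.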

When to use it

  • When you need a repeatable decision protocol for routine tasks
  • When outcomes must be verifiable with objective checks
  • When integrating an AI agent into an existing workflow with clear pass/fail criteria
  • When you want to reduce ambiguous or inconsistent agent behavior
  • When onboarding new automation or QA processes

Best practices

  • Keep the knowledge base minimal and domain-specific to avoid drift
  • Encode decision steps as simple, ordered rules with explicit fallbacks (see the sketch after this list)
  • Define success criteria as measurable checks or assertions, not vague goals
  • Use anti-patterns proactively in validation and post-action audits
  • Log decision steps and failed checks to improve the knowledge base over time
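
A minimal sketch of two of these practices, ordered rules with an explicit fallback and logged decision steps; the rule table and task fields are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("adhering-standards")

# Hypothetical rule table: (name, predicate, action). Order matters, and the
# final catch-all is the explicit fallback.
RULES = [
    ("missing_input", lambda t: not t.get("payload"), "request_clarification"),
    ("high_priority", lambda t: t.get("priority") == "high", "escalate"),
    ("routine", lambda t: True, "process"),
]

def decide(task: dict) -> str:
    for name, predicate, action in RULES:
        fired = predicate(task)
        log.info("rule=%s fired=%s", name, fired)  # log every decision step
        if fired:
            return action
    return "request_clarification"  # unreachable while a catch-all rule is last

print(decide({"payload": "data", "priority": "high"}))  # -> escalate
```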

Example use cases

  • Automated triage: route tasks to teams based on fixed rules and validation checks
  • Form validation: apply decision logic to accept, reject, or request corrections with clear pass/fail tests (see the example after this list)
  • Content moderation: follow ordered rules and success criteria to flag or clear items
  • Onboarding flows: guide stepwise configuration with checks at each stage to ensure readiness
  • Regression guarding: run measurable assertions after changes to detect regressions early
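
For the form-validation case, here is one possible shape for accept/reject/request-corrections logic backed by objective tests; the field names and rules are invented for illustration:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(form: dict) -> tuple[str, list[str]]:
    """Return an action plus the failed checks that justify it."""
    problems = []
    if not form.get("email"):
        problems.append("email is required")
    elif not EMAIL_RE.match(form["email"]):
        problems.append("email is malformed")
    if len(form.get("name", "")) < 2:
        problems.append("name must be at least 2 characters")

    if not problems:
        return "accept", problems
    if form.get("email") or form.get("name"):
        return "request_corrections", problems  # user supplied something fixable
    return "reject", problems                   # empty submission

print(validate({"email": "ada@example.com", "name": "Ada"}))  # ('accept', [])
print(validate({"email": "not-an-email", "name": "Ada"}))     # ('request_corrections', [...])
```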

FAQ

How do I know when to escalate versus retry?

Escalate when a decision step hits a defined failure threshold or when required inputs are missing after a bounded number of retries. Retry only for transient errors and record each attempt.
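
One way that policy might look in Python; the exception types, retry bound, and escalate() hook are assumptions standing in for whatever your pipeline defines:

```python
import time

class TransientError(Exception):
    """Temporary failure (timeout, rate limit) that may succeed on retry."""

class MissingInputError(Exception):
    """A required input is absent; retrying cannot help."""

MAX_RETRIES = 3  # the bounded retry threshold

def escalate(task, attempts):
    print(f"escalating {task!r} after {len(attempts)} recorded attempt(s)")

def run(step, task):
    attempts = []
    for n in range(1, MAX_RETRIES + 1):
        try:
            return step(task)
        except TransientError as exc:
            attempts.append((n, str(exc)))  # record each attempt
            time.sleep(2 ** n)              # simple backoff before retrying
        except MissingInputError:
            break                           # skip straight to escalation
    escalate(task, attempts)                # threshold hit or inputs missing
```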

What makes a good success criterion?

A good criterion is objective, measurable, and reproducible: for example, an API call returns 200 with the expected schema, a field matches a regex, or a task completes within a bounded time.
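
Criteria like these can be expressed as plain predicates over an observed result; the result shape, schema fields, ID format, and time budget below are illustrative:

```python
import re

def returns_200_with_schema(result: dict) -> bool:
    # Status is 200 and the body carries at least the expected fields.
    return result["status"] == 200 and {"id", "name"} <= result["body"].keys()

def id_matches_format(result: dict) -> bool:
    return re.fullmatch(r"ORD-\d{6}", result["body"]["id"]) is not None

def within_time_budget(result: dict) -> bool:
    return result["elapsed_s"] <= 2.0

CRITERIA = [returns_200_with_schema, id_matches_format, within_time_budget]

result = {"status": 200, "body": {"id": "ORD-004217", "name": "widget"}, "elapsed_s": 0.8}
print(all(check(result) for check in CRITERIA))  # True only if every check passes
```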