
This skill guides the design and auditing of AI prompts using 2026 Complexity-Based Guidance, Attention Management, and an XML/Markdown decision matrix to ensure quality and consistency.

npx playbooks add skill git-fg/thecattoolkit --skill architecting-prompts

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: architecting-prompts
description: "Applies 2026 Complexity-Based Guidance standards with Attention Management, Sycophancy Prevention, and XML/Markdown decision matrix. Provides theory, patterns, and quality evaluation criteria for AI prompt design. Use when designing, optimizing, or auditing AI prompts, system instructions, or multi-stage chains. Do not use for generating prompt files, basic conversational AI, or single-step interactions."
allowed-tools: [Read, Write, Edit, Glob, Grep]
---

# Prompt Architecture & Design Standards

## Operational Protocol

1. **Analyze Intent**: Determine if the goal is Drafting, Optimizing, or Auditing a prompt.
2. **Consult Standards**: PROACTIVELY load `references/core-standards.md` for Attention Management rules.
3. **Select Pattern (Signal-to-Noise Rule)**:
   - **Markdown-First** (Default): Use for 80% of tasks.
   - **Hybrid XML**: Use ONLY if:
     - Data Isolation (>50 lines raw data)
     - Strict Constraints (NEVER/MUST rules)
     - Internal Monologue (Complex reasoning)
4. **Apply Theory**: Use `references/optimization.md` for refinement workflows.
5. **Verify**: Apply `references/quality.md` gates before final output.

## Core Principles (Quick Reference)

### Attention Management
Use Markdown headers for hierarchy. Reserve XML tags (max 15, no nesting) ONLY for semantic data isolation or thinking scaffolding.
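
A minimal sketch of that layout, where the section names and the `<data>` tag are illustrative rather than required:

```markdown
# Role
You are a release-notes writer.

## Constraints
- Keep each entry under 20 words.
- NEVER invent ticket numbers.

<data>
{raw changelog pasted here}
</data>
```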

### Sycophancy Prevention (Truth-First)

If the user suggests a flawed path → CONTRADICT immediately. No "Great idea!" or superlatives. Speak in code, files, and commands.
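
For illustration, a truth-first instruction block might read as follows; the exact wording is a sketch, not a fixed template:

```markdown
## Interaction Rules
- If the proposed approach is flawed, say so immediately and name the concrete problem.
- Do not open with praise ("Great idea!", "Excellent question") or use superlatives.
- Ground every claim in code, file paths, or commands.
```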

### Signal-to-Noise Rule
- **Default**: Markdown (80% of prompts) - fewer tokens, Claude-native
- **Upgrade to XML/Markdown hybrid** only when:
  - Data Isolation: >50 lines of raw data
  - Constraint Weight: NEVER/MUST rules that cannot be broken
  - Internal Monologue: Complex reasoning requiring step-by-step
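
The sketch below shows the hybrid form with all three triggers present; apart from `<thinking>`, the tag and section names are illustrative:

```markdown
# Task
Classify each support ticket in <data> by severity.

## Rules
- NEVER assign "critical" unless the ticket reports an outage.
- MUST return exactly one label per ticket.

## Reasoning
Work through each ticket inside a <thinking> block before giving the final labels.

<data>
{50+ lines of raw ticket text pasted here}
</data>
```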

## Knowledge Index (Progressive Disclosure)

| Reference | Purpose | Load When |
|:----------|:--------|:----------|
| **core-standards.md** | Attention, Sycophancy, Quota, XML/MD matrix | ALWAYS consult first |
| **design-patterns.md** | CoT, Few-Shot, Taxonomy, Structural patterns | Selecting technique |
| **optimization.md** | Systematic refinement workflow | Improving existing prompts |
| **quality.md** | Production quality gates | Final verification |
| **anti-patterns.md** | Common mistakes to avoid | Prevention |
| **taxonomy.md** | Single vs Chain vs Meta categorization | Storage/planning |
| **execution-protocol.md** | Standard completion reporting | Structured output |

## Design Patterns

### Approved Patterns
- **Chain of Thought (CoT)**
- **Few-Shot Learning**
- **Structured Output**
- **Constraint Encoding**
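
A combined sketch of Few-Shot Learning and Structured Output, with made-up tickets and field names:

```markdown
## Output Format
Return one JSON object per ticket: {"id": "...", "severity": "low" | "medium" | "high"}

<example>
Input: "Checkout returns 500 for all users since 09:00."
Output: {"id": "T-1042", "severity": "high"}
</example>

<example>
Input: "Typo in the footer copyright year."
Output: {"id": "T-1043", "severity": "low"}
</example>
```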

## Success Criteria

A prompt meets 2026 standards when:
- [ ] Uses Markdown headers for hierarchy (default)
- [ ] XML tags are < 15 and never nested
- [ ] Instructions are specific, actionable, and truth-focused
- [ ] Examples (if any) are isolated in `<example>` tags
- [ ] Reasoning is isolated in `<thinking>` blocks (if needed)
- [ ] Quality gate checklist is included
- [ ] Output format is clearly specified

**Note**: For generating .md prompt files for Claude-to-Claude pipelines, use `generating-prompts` skill.

Overview

This skill applies the 2026 Complexity-Based Guidance standards to design, optimize, and audit AI prompts and multi-stage chains. It enforces Attention Management, prevents sycophancy, and uses a Markdown/XML decision matrix to choose the correct representation. Use it when you need production-grade prompt architecture, not for one-off conversational replies or simple prompt generation tasks.

How this skill works

First, the skill classifies intent as Drafting, Optimizing, or Auditing. It then loads the core attention and quality standards and selects a pattern using the Signal-to-Noise Rule (Markdown-first; upgrade to the Hybrid XML pattern only for large blocks of raw data, strict NEVER/MUST constraints, or internal monologue). Finally, it applies the optimization workflow and runs quality gates to produce actionable, truth-focused instructions and a final checklist.

When to use it

  • Designing multi-step prompt chains or system instructions
  • Optimizing existing prompts for reliability and lower hallucination risk
  • Auditing prompts against attention and sycophancy standards
  • Preparing prompts that include constrained outputs or structured data
  • Isolating complex internal reasoning that requires stepwise scaffolding

Best practices

  • Default to Markdown headers for hierarchy; reserve XML tags for data isolation or thinking scaffolds
  • Limit XML: fewer than 15 tags and avoid nesting
  • Immediately contradict flawed suggestions from the user; avoid flattering or sycophantic phrasing
  • Isolate examples in <example> tags and internal reasoning in <thinking> when required
  • Include a short quality gate checklist with every deliverable
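
For instance, a quality gate appended to a delivered prompt might look like this (items are illustrative; tailor them to the prompt being shipped):

  • Output format stated explicitly
  • All NEVER/MUST constraints gathered in one place
  • Examples wrapped in <example> tags
  • Fewer than 15 XML tags, none nested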

Example use cases

  • Draft a multi-stage instruction set for an agent that must follow strict NEVER/MUST rules using XML/Markdown hybrid
  • Optimize a prompt to reduce token cost while preserving clarity using Markdown-first pattern
  • Audit a chain-of-thought prompt to ensure reasoning is isolated and non-sycophantic
  • Convert an ambiguous single-step prompt into a structured-template prompt with explicit output schema
  • Design few-shot examples isolated in <example> tags for robust structured output
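
To make the conversion case concrete, a vague request such as "Summarize this report" might be restructured along these lines (the schema and limits are a sketch, not a mandated format):

  # Task
  Summarize the report in <data> for an executive audience.

  ## Output Format
  Return exactly three bullet points, each under 25 words, followed by a single "Risk:" line.

  <data>
  {report text pasted here}
  </data>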

FAQ

When should I choose XML/Hybrid over Markdown?

Use XML/Hybrid only if you have more than ~50 lines of raw data to isolate, strict NEVER/MUST constraints that cannot be represented safely in free text, or when you must capture internal monologue/stepwise reasoning.

How do I prevent sycophancy in prompts?

Enforce truth-first language: explicitly instruct the model to contradict flawed user suggestions, ban praise or superlatives, and provide factual checks. Express rules as commands and include verification steps in the quality gate.