
prompting skill

/prompting

This skill helps you optimize prompts and context loading to maximize signal and minimize tokens, improving AI agent guidance.

This is most likely a fork of the prompting skill from danielmiessler.
npx playbooks add skill zpankz/mcp-skillset --skill prompting

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
2.6 KB
---
name: prompting
description: Prompt engineering standards and context engineering principles for AI agents based on Anthropic best practices. Covers clarity, structure, progressive discovery, and optimization for signal-to-noise ratio.
---

# Prompting Skill

## When to Activate This Skill
- Prompt engineering questions
- Context engineering guidance
- AI agent design
- Prompt structure help
- Best practices for LLM prompts
- Agent configuration

## Core Philosophy
**Context engineering** = Curating the optimal set of tokens available to an LLM during inference

**Primary Goal:** Find the smallest possible set of high-signal tokens that maximizes desired outcomes

## Key Principles

### 1. Context Is a Finite Resource
- LLMs have limited "attention budget"
- Performance degrades as context grows
- Every token depletes capacity
- Treat context as precious

### 2. Optimize Signal-to-Noise
- Clear, direct language over verbose explanations
- Remove redundant information
- Focus on high-value tokens

### 3. Progressive Discovery
- Use lightweight identifiers vs full data dumps
- Load detailed info dynamically when needed
- Just-in-time information loading
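As a minimal sketch, progressive discovery can look like this in Python: the initial prompt carries only compact identifiers, and full records are resolved on demand. The `DOCS` store and both function names are hypothetical, introduced here only for illustration.

```python
# Progressive discovery: expose lightweight identifiers up front,
# fetch full data just-in-time when the agent asks for it.
# DOCS is a hypothetical document store used only for this sketch.
DOCS = {
    "doc-1": {"title": "Q3 revenue report", "body": "...full 40 KB of text..."},
    "doc-2": {"title": "Onboarding checklist", "body": "...full 12 KB of text..."},
}

def list_identifiers():
    """Return compact references suitable for the initial prompt."""
    return [{"id": doc_id, "title": doc["title"]} for doc_id, doc in DOCS.items()]

def load_detail(doc_id):
    """Load the full body only when the agent requests this document."""
    return DOCS[doc_id]["body"]

# The initial context stays small: titles only; bodies are loaded on demand.
references = list_identifiers()
```

The context cost of the first turn is a few short titles rather than tens of kilobytes of body text.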

## Markdown Structure Standards

Use clear semantic sections:
- **Background Information**: Minimal essential context
- **Instructions**: Imperative voice, specific, actionable
- **Examples**: Show don't tell, concise, representative
- **Constraints**: Boundaries, limitations, success criteria
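The four sections above can be assembled mechanically. A sketch, with illustrative field values (the function name and sample content are assumptions, not part of any real API):

```python
# Render the standard four-section prompt layout as Markdown.
def build_prompt(background, instructions, examples, constraints):
    """Join the semantic sections in the order recommended above."""
    return "\n\n".join([
        f"## Background Information\n{background}",
        f"## Instructions\n{instructions}",
        f"## Examples\n{examples}",
        f"## Constraints\n{constraints}",
    ])

prompt = build_prompt(
    background="User uploads CSV sales data.",
    instructions="Validate input before processing. Compute monthly totals.",
    examples='Input: 2024-01,120.50 -> Output: {"2024-01": 120.50}',
    constraints="Reject rows with missing dates. Respond in JSON only.",
)
```

Keeping section rendering in one helper makes it easy to audit each prompt for redundant or missing content.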

## Writing Style

### Clarity Over Completeness
✅ Good: "Validate input before processing"
❌ Bad: "You should always make sure to validate..."

### Be Direct
✅ Good: "Use calculate_tax tool with amount and jurisdiction"
❌ Bad: "You might want to consider using..."

### Use Structured Lists
✅ Good: Bulleted constraints
❌ Bad: Paragraph of requirements

## Context Management

### Just-in-Time Loading
Don't load full data dumps; use references and load details only when needed

### Structured Note-Taking
Persist important info outside context window

### Sub-Agent Architecture
Delegate subtasks to specialized agents with minimal context
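One way to picture sub-agent delegation: the parent holds the full context, but each sub-agent receives only the slice its subtask needs. This is a sketch; `run_subagent` is a hypothetical stand-in for a real LLM call, and the context fields are invented for illustration.

```python
# Sub-agent delegation: each specialized agent gets minimal context.
def run_subagent(role, context, task):
    """Simulate a specialized agent; a real implementation would call an LLM."""
    return f"[{role}] {task} (given {len(context)} chars of context)"

# The parent agent's full context (illustrative fields only).
full_context = {
    "codebase_summary": "auth module, billing module, ...",
    "error_log": "TypeError in billing.compute_tax at line 88",
    "style_guide": "PEP 8, type hints required",
}

# The debugging sub-agent sees only the error log, not the style guide
# or codebase summary, so its attention budget goes entirely to the bug.
debug_result = run_subagent(
    role="debugger",
    context=full_context["error_log"],
    task="Diagnose the TypeError",
)
```

The parent then acts on the sub-agent's compact result instead of carrying every subtask's working context itself.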

## Best Practices Checklist
- [ ] Uses Markdown headers for organization
- [ ] Clear, direct, minimal language
- [ ] No redundant information
- [ ] Actionable instructions
- [ ] Concrete examples
- [ ] Clear constraints
- [ ] Just-in-time loading when appropriate

## Anti-Patterns
❌ Verbose explanations
❌ Historical context dumping
❌ Overlapping tool definitions
❌ Premature information loading
❌ Vague instructions ("might", "could", "should")

## Supplementary Resources
For full standards: `read ${PAI_DIR}/skills/prompting/CLAUDE.md`

## Based On
Anthropic's "Effective Context Engineering for AI Agents"

Overview

This skill describes prompt engineering and context engineering practices based on Anthropic-inspired best practices. It teaches how to design concise, high-signal prompts and manage limited context to maximize LLM performance. The focus is on clarity, structure, progressive discovery, and optimizing signal-to-noise ratio.

How this skill works

It inspects prompt composition and context usage to identify wasted tokens and weak signals. It provides a structured template (background, instructions, examples, constraints) and tactics like just-in-time loading and sub-agent delegation. The skill recommends progressive disclosure of details and pruning redundant language to preserve the model's attention budget.

When to use it

  • Designing or reviewing prompts for LLMs and agents
  • Configuring agent context and tool interfaces
  • Reducing prompt length while preserving task fidelity
  • Creating multi-step agent workflows with sub-agents
  • Preparing examples and constraints for deterministic outputs

Best practices

  • Treat context as a limited resource; limit tokens to high-value content
  • Use clear, imperative instructions and short sentences
  • Organize prompts into Background, Instructions, Examples, Constraints
  • Prefer identifiers and references; load full data only when needed
  • Persist long-term state outside the immediate context window
  • Delegate specialized tasks to sub-agents with minimal context

Example use cases

  • Condense a user story into a one-paragraph background and specific acceptance tests
  • Design an agent that asks clarifying questions before requesting large datasets
  • Create a tool spec using short imperative lines and explicit input/output fields
  • Build a multi-agent pipeline where one agent summarizes and others act on summaries
  • Replace verbose examples with compact representative cases and edge-case constraints

FAQ

How do I choose what to keep in the context window?

Keep only tokens that directly affect the decision or output. Prefer concise goals, required inputs, and a few representative examples; reference or fetch large data on demand.

When should I use progressive discovery?

Use it whenever initial prompts can benefit from clarification or when full data is large. Start with lightweight identifiers, then load details after a confirmation or request.

What tone and phrasing work best?

Use direct, imperative voice. State actions plainly (e.g., 'Validate input', 'Use tool X with params Y') and avoid hedging words like might/could/should.