
prompt-engineering skill

/prompt-engineering

This skill helps you design, test, and optimize prompts for agents and sub-agents, improving reliability and output quality across LLM interactions.

npx playbooks add skill zpankz/mcp-skillset --skill prompt-engineering

Review the files below or copy the command above to add this skill to your agents.

Files (3)
SKILL.md
1.8 KB
---
name: prompt-engineering
description: Use this skill when writing commands, hooks, or skills for agents, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.
---

# Prompt Engineering Patterns

Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.

## Overview

Effective prompt engineering combines structured patterns, iterative optimization, and psychological principles to achieve consistent, high-quality LLM outputs. This skill covers core capabilities, key patterns, best practices, and production-ready templates.

## Core Capabilities

1. **Few-Shot Learning**: Teach by showing examples (2-5 input-output pairs)
2. **Chain-of-Thought Prompting**: Request step-by-step reasoning
3. **Prompt Optimization**: Systematically improve through testing
4. **Template Systems**: Build reusable prompt structures
5. **System Prompt Design**: Set global behavior and constraints
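
As a sketch of the few-shot pattern above, a minimal prompt builder might look like the following; the function name and the `Input:`/`Output:` format are illustrative assumptions, not part of this skill:

```python
# Minimal few-shot prompt builder: teach a format by showing 2-5 examples.
# All names and the prompt layout are illustrative, not a fixed API.

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt from a task description, example pairs, and a query."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")  # blank line between examples
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life!", "positive"),
     ("Arrived broken and late.", "negative")],
    "Works exactly as described.",
)
```

Two to five curated pairs are usually enough to pin down the output format; the trailing `Output:` cues the model to complete in the demonstrated style.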

## When to Use

Use prompt engineering when:
- Writing commands, hooks, or skills for agents
- Designing prompts for sub-agents
- Optimizing LLM interactions
- Building production prompt templates
- Improving output consistency and reliability

## Progressive Loading

**L2 Content** (loaded when patterns and practices needed):
- See: [references/patterns.md](./references/patterns.md)
  - Core Capabilities (detailed)
  - Key Patterns
  - Best Practices
  - Common Pitfalls
  - Integration Patterns
  - Performance Optimization

**L3 Content** (loaded when advanced techniques and examples needed):
- See: [references/advanced.md](./references/advanced.md)
  - The Seven Principles
  - Principle Combinations by Prompt Type
  - Psychology Behind Effective Prompts
  - Ethical Use Guidelines
  - Production Examples
  - Quick Reference

Overview

This skill teaches practical prompt engineering for building reliable, controllable LLM interactions used in agents, hooks, and sub-agents. It focuses on patterns, templates, and iterative optimization to produce consistent, high-quality outputs. The guidance combines few-shot techniques, chain-of-thought prompting, and system-level design for production use.

How this skill works

The skill inspects and refines prompt structure, applying patterns like few-shot examples, step-by-step reasoning prompts, and system prompts to enforce behavior. It provides reusable template systems and a workflow for prompt optimization: design, test, measure, and iterate. It also highlights integration points for agents, including hooks and sub-agent coordination strategies.
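
The design, test, measure, iterate workflow above can be sketched as a small evaluation loop. Everything here is a hypothetical sketch: `run_llm` stands in for whatever model call you actually use, and the stub below exists only to make the example runnable.

```python
# Sketch of the design -> test -> measure -> iterate loop for prompt templates.

def evaluate_prompt(prompt_template, test_cases, run_llm):
    """Score a prompt template against realistic inputs; returns the pass rate."""
    passed = 0
    for case in test_cases:
        output = run_llm(prompt_template.format(**case["inputs"]))
        if case["check"](output):
            passed += 1
    return passed / len(test_cases)

# Deterministic stub in place of a real model call, so the sketch runs as-is.
def fake_llm(prompt):
    return "positive" if "Great" in prompt else "negative"

cases = [
    {"inputs": {"text": "Great product"}, "check": lambda o: o == "positive"},
    {"inputs": {"text": "Terrible"}, "check": lambda o: o == "negative"},
]
score = evaluate_prompt("Classify: {text}", cases, fake_llm)
```

With a pass rate per candidate template, iteration becomes a comparison: change one element of the prompt, re-run the same test cases, and keep the better performer.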

When to use it

  • Creating commands, hooks, or skills for agent platforms
  • Designing prompts for sub-agents or chained LLM workflows
  • Optimizing LLM output consistency and reliability
  • Building production-ready prompt templates and template systems
  • Improving reasoning or traceability with chain-of-thought prompts

Best practices

  • Start with clear system-level instructions to set global constraints and tone
  • Use 2–5 curated input-output examples for few-shot tuning where appropriate
  • Request step-by-step reasoning only when you need transparency; summarize final answers afterward
  • Iteratively test prompts against realistic inputs and measure output quality
  • Modularize prompts into templates and variables for reuse and easier updates
  • Monitor hallucinations and add guardrails or verification steps for critical outputs
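
The system-prompt and modularization practices above can be sketched with standard-library templating; the role, section names, and variables are illustrative assumptions:

```python
# Modular prompt templates: system constraints separated from the user turn,
# with named variables so each part can be updated independently.
from string import Template

SYSTEM = Template(
    "You are a $role. Follow these constraints:\n"
    "- Answer only from the provided context.\n"
    "- If the context is insufficient, say so explicitly.\n"
)
USER = Template("Context:\n$context\n\nQuestion: $question")

system_prompt = SYSTEM.substitute(role="support assistant")
user_prompt = USER.substitute(
    context="Refunds take 5-7 business days.",
    question="How long do refunds take?",
)
```

Keeping constraints in the system template means a guardrail change touches one string, not every call site; `Template.substitute` also raises on missing variables, catching wiring mistakes early.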

Example use cases

  • Authoring agent commands that require deterministic behavior and error handling
  • Designing a sub-agent that extracts structured data from free text using few-shot examples
  • Optimizing a prompt to reduce hallucinations in factual QA by adding verification steps
  • Creating a production template system that injects context, user preferences, and safety constraints
  • Building a multi-step workflow where chain-of-thought prompts improve intermediate reasoning quality
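
For the factual-QA use case above, one way to add a verification step is a two-pass prompt: draft an answer, then ask the model to check the draft against the sources. This is a sketch under assumptions; `ask` is a hypothetical model-call function, and the stub exists only to make the example runnable.

```python
# Two-pass verification sketch: draft, then verify against sources.

def answer_with_verification(question: str, sources: str, ask) -> str:
    draft = ask(f"Using only these sources:\n{sources}\n\nAnswer: {question}")
    verdict = ask(
        f"Sources:\n{sources}\n\nClaimed answer: {draft}\n"
        "Reply SUPPORTED if every claim is backed by the sources, "
        "otherwise reply UNSUPPORTED."
    )
    return draft if verdict.strip() == "SUPPORTED" else "Insufficient evidence in sources."

# Deterministic stub standing in for a real model call.
def stub_ask(prompt):
    if "Claimed answer" in prompt:
        return "SUPPORTED"
    return "5-7 business days"

result = answer_with_verification(
    "How long do refunds take?", "Refunds take 5-7 business days.", stub_ask
)
```

The second pass trades one extra model call for a checkpoint that catches unsupported claims before they reach the user.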

FAQ

How many examples should I include for few-shot teaching?

Start with 2–5 high-quality examples; add more only if the diversity of cases requires it.

When should I request chain-of-thought vs. a concise answer?

Use chain-of-thought when you need interpretability or stepwise reasoning. Ask for a concise summary after the chain to produce a tidy final output.
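
One common way to phrase that combination in a single prompt, sketched as a helper (the exact wording is a convention, not a requirement of this skill):

```python
# Chain-of-thought followed by a concise, parseable final line.

def cot_prompt(question: str) -> str:
    return (
        f"Question: {question}\n\n"
        "Think through the problem step by step, showing your reasoning.\n"
        "Then end with a single line starting with 'Final answer:' "
        "containing only the concise result."
    )

p = cot_prompt("A train travels 120 km in 90 minutes. What is its average speed in km/h?")
```

The fixed `Final answer:` marker makes the tidy summary easy to extract downstream while the reasoning above it stays available for inspection.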