prompt-optimization skill

This skill optimizes prompts for LLMs and AI systems, improving response quality through structured design, few-shot learning, and clear output formats.

npx playbooks add skill 89jobrien/steve --skill prompt-optimization

SKILL.md
---
name: prompt-optimization
description: Expert prompt optimization for LLMs and AI systems. Use when building
  AI features, improving agent performance, crafting system prompts, or optimizing
  LLM interactions. Covers core prompt patterns and techniques.
author: Joseph OBrien
status: unpublished
updated: '2025-12-23'
version: 1.0.1
tag: skill
type: skill
---

# Prompt Optimization

This skill optimizes prompts for LLMs and AI systems, focusing on effective prompt patterns, few-shot learning, and optimal AI interactions.

## When to Use This Skill

- When building AI features or agents
- When improving LLM response quality
- When crafting system prompts
- When optimizing agent performance
- When implementing few-shot learning
- When designing AI workflows

## What This Skill Does

1. **Prompt Design**: Creates effective prompts with clear structure
2. **Few-Shot Learning**: Implements few-shot examples for better results
3. **Chain-of-Thought**: Uses reasoning patterns for complex tasks
4. **Output Formatting**: Specifies clear output formats
5. **Constraint Setting**: Sets boundaries and constraints
6. **Performance Optimization**: Improves prompt efficiency and results

## How to Use

### Optimize Prompt

```
Optimize this prompt for better results
```

```
Create a system prompt for a code review agent
```

### Specific Patterns

```
Implement few-shot learning for this task
```

## Prompt Techniques

### Structure

**Clear Sections:**

- Role definition
- Task description
- Constraints and boundaries
- Output format
- Examples
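
These sections can be assembled mechanically into a single prompt string. A minimal sketch in Python; the `build_prompt` helper and its arguments are illustrative, not part of this skill:

```python
def build_prompt(role, task, constraints, output_format, examples=()):
    """Assemble a structured prompt from the standard sections."""
    parts = [
        f"Role: {role}",
        f"Task: {task}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Output format: {output_format}",
    ]
    if examples:
        parts.append("Examples:\n" + "\n\n".join(examples))
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are a senior Python reviewer.",
    task="Review the code snippet for bugs.",
    constraints=["Cite line numbers", "No style nitpicks"],
    output_format="A bullet list, one issue per bullet.",
)
print(prompt)
```

Keeping each section on its own labeled line makes prompts easy to diff and iterate on.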

### Few-Shot Learning

**Pattern:**

- Provide 2-3 examples
- Show input-output pairs
- Demonstrate desired style
- Include edge cases
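
The pattern above maps naturally onto a chat-completion message list. A sketch, assuming the common `role`/`content` message shape; the `few_shot_messages` helper is hypothetical:

```python
def few_shot_messages(system, pairs, query):
    """Build a chat message list from (input, output) example pairs."""
    messages = [{"role": "system", "content": system}]
    for user_input, assistant_output in pairs:
        messages.append({"role": "user", "content": user_input})
        messages.append({"role": "assistant", "content": assistant_output})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("I love this product!", "positive"),
    ("Terrible support, never again.", "negative"),
    ("It arrived on time.", "neutral"),  # edge case: no strong sentiment
]
msgs = few_shot_messages(
    "Classify sentiment as positive, negative, or neutral.",
    examples,
    "The battery lasts forever.",
)
```

Because the examples are ordinary messages, the model sees them as prior turns and imitates the demonstrated style.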

### Chain-of-Thought

**Approach:**

- Break down complex tasks
- Show reasoning steps
- Encourage step-by-step thinking
- Verify intermediate results
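
One simple way to apply this is to append explicit reasoning instructions to the task prompt. A sketch; the wording of the steps is just one possible scaffold:

```python
def with_chain_of_thought(task):
    """Append step-by-step reasoning instructions to a task prompt."""
    return (
        f"{task}\n\n"
        "Work through this step by step:\n"
        "1. Restate the problem in your own words.\n"
        "2. Break it into sub-steps and solve each one.\n"
        "3. Check each intermediate result before continuing.\n"
        "4. State the final answer on its own line, prefixed with 'Answer:'."
    )

print(with_chain_of_thought("A train leaves at 3pm traveling 60 mph..."))
```

Requiring a fixed `Answer:` prefix also makes the final result easy to parse out of the reasoning.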

## Examples

### Example 1: Code Review Prompt

**Input**: Create an optimized code review prompt

**Output**:

```markdown
## Optimized Prompt: Code Review

### The Prompt

You are an expert code reviewer with 10+ years of experience.

Review the provided code focusing on:

1. Security vulnerabilities
2. Performance optimizations
3. Code maintainability
4. Best practices

For each issue found, provide:

- Severity level (Critical/High/Medium/Low)
- Specific line numbers
- Explanation of the issue
- Suggested fix with code example

Format your response as a structured report with clear sections.

### Techniques Used

- Role-playing for expertise
- Clear evaluation criteria
- Specific output format
- Actionable feedback requirements
```

## Best Practices

### Prompt Design

1. **Be Specific**: Clear, unambiguous instructions
2. **Provide Examples**: Show desired output format
3. **Set Constraints**: Define boundaries clearly
4. **Iterate**: Test and refine prompts
5. **Document**: Keep track of effective patterns

## Related Use Cases

- AI agent development
- LLM optimization
- System prompt creation
- Few-shot learning implementation
- AI workflow design

## Overview

This skill provides expert prompt optimization for large language models and AI systems. It helps design concise, high-performing prompts, implement few-shot patterns, and set constraints to produce reliable outputs. Use it to improve agent behavior, response quality, and task-specific performance.

## How this skill works

The skill inspects prompt structure, clarity, examples, and output formatting, then suggests revisions that reduce ambiguity and guide model reasoning. It applies patterns like role definitions, few-shot examples, chain-of-thought scaffolding, and explicit constraints to boost consistency and efficiency. Recommendations include concrete prompt rewrites, example pairs, and evaluation criteria.
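
The structural checks described above can be approximated with simple heuristics. A rough sketch; the `lint_prompt` function and its keyword checks are illustrative only, not the skill's actual logic:

```python
def lint_prompt(prompt):
    """Flag common structural gaps in a prompt (crude keyword heuristics)."""
    issues = []
    text = prompt.lower()
    if not any(k in text for k in ("you are", "act as", "role:")):
        issues.append("no role definition")
    if "format" not in text:
        issues.append("no explicit output format")
    if "example" not in text:
        issues.append("no examples")
    return issues

issues = lint_prompt("Summarize this article.")
```

A real review would also weigh ambiguity, example quality, and constraint coverage, which keyword matching cannot capture.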

## When to use it

- Building AI features or conversational agents
- Improving LLM response relevance and reliability
- Crafting system or initializer prompts
- Implementing few-shot learning for niche tasks
- Optimizing agent performance and cost-efficiency

## Best practices

- Define a clear role and task upfront to set model context
- Include 2–3 curated examples showing input-output style and edge cases
- Specify exact output format (JSON, bullet list, tables) and validation rules
- Break complex tasks into step-by-step reasoning or chain-of-thought prompts
- Iterate: test prompts, collect failure cases, and refine examples
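
The iterate step benefits from even a tiny evaluation harness. A sketch, with hypothetical names; a trivial stand-in replaces a real model call:

```python
def evaluate_prompt(run_model, prompt_template, cases):
    """Score a prompt template against labeled cases.

    `run_model` is any callable that takes a full prompt string and
    returns the model's text output. Returns (pass_rate, failures).
    """
    failures = []
    for user_input, expected in cases:
        output = run_model(prompt_template.format(input=user_input))
        if expected.lower() not in output.lower():
            failures.append((user_input, expected, output))
    return 1 - len(failures) / len(cases), failures

# A fake model stands in for a real LLM call in this sketch.
fake_model = lambda p: "positive" if "love" in p else "negative"
score, fails = evaluate_prompt(
    fake_model,
    "Classify the sentiment of: {input}",
    [("I love it", "positive"), ("I hate it", "negative")],
)
```

Collecting the `failures` list gives you concrete cases to fold back into the prompt's few-shot examples.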

## Example use cases

- Create a system prompt for a code review agent with explicit checklists and output schema
- Design few-shot examples to teach a model domain-specific classification labels
- Rewrite user-facing prompts to reduce hallucinations and improve factuality
- Set constraints and safety filters for content moderation agents
- Optimize prompts to reduce token usage while retaining output quality
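
For the token-usage case, even naive whitespace and filler removal helps. A crude sketch; real savings depend on the tokenizer, and the filler list is illustrative:

```python
import re

def compress_prompt(prompt):
    """Strip filler phrases and redundant whitespace to save tokens."""
    filler = [r"\bplease\b", r"\bkindly\b", r"\bI would like you to\b"]
    for pattern in filler:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    # Collapse runs of whitespace, which tokenizers often count separately.
    return re.sub(r"\s+", " ", prompt).strip()

compressed = compress_prompt("Please  kindly summarize   this report.")
```

Measure quality before and after compression; aggressive trimming can remove context the model actually needs.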

## FAQ

**How many examples should I include for few-shot learning?**

Start with 2–3 high-quality examples that cover common cases and an edge case; add more only if diversity of inputs demands it.

**When should I use chain-of-thought versus concise instructions?**

Use chain-of-thought for complex, multi-step reasoning tasks where intermediate verification helps; prefer concise, constrained prompts for simple extraction or formatting tasks.