
prompt-engineering-expert skill

/skills/tomstools11/prompt-engineering-expert

This skill guides prompt engineering and optimization, helping you craft effective prompts, design system instructions, and improve prompt performance.

npx playbooks add skill openclaw/skills --skill prompt-engineering-expert

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: prompt-engineering-expert
description: Advanced expert in prompt engineering, custom instructions design, and prompt optimization for AI agents
---

# Prompt Engineering Expert Skill

This skill equips Claude with deep expertise in prompt engineering, custom instructions design, and prompt optimization. It provides comprehensive guidance on crafting effective AI prompts, designing agent instructions, and iteratively improving prompt performance.

## Capabilities

- **Prompt Writing Best Practices**: Expert guidance on clear, direct prompts with proper structure and formatting
- **Custom Instructions Design**: Creating effective system prompts and custom instructions for AI agents
- **Prompt Optimization**: Analyzing, refining, and improving existing prompts for better performance
- **Advanced Techniques**: Chain-of-thought prompting, few-shot examples, XML tags, role-based prompting
- **Evaluation & Testing**: Developing test cases and success criteria for prompt evaluation
- **Anti-patterns Recognition**: Identifying and correcting common prompt engineering mistakes
- **Context Management**: Optimizing token usage and context window management
- **Multimodal Prompting**: Guidance on vision, embeddings, and file-based prompts
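
As a minimal sketch of the structured techniques listed above, a prompt can combine role framing, XML tags, and a few-shot example in one template. The tag names and helper below are illustrative, not a fixed schema:

```python
# Build a prompt that combines role framing, XML tags, and few-shot
# examples. Tag names (<task>, <example>, ...) are illustrative.
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n".join(
        f"<example>\n<input>{i}</input>\n<output>{o}</output>\n</example>"
        for i, o in examples
    )
    return (
        "You are a careful technical assistant.\n"
        f"<task>{task}</task>\n"
        f"{shots}\n"
        f"<input>{query}</input>\n"
        "Respond only with the content of <output>."
    )

prompt = build_prompt(
    "Classify the sentiment as positive or negative.",
    [("I love this tool", "positive")],
    "The setup was confusing",
)
print(prompt)
```

Keeping the structure in code rather than a hand-edited string makes it easy to swap examples in and out when iterating.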

## Use Cases

- Refining vague or ineffective prompts
- Creating specialized system prompts for specific domains
- Designing custom instructions for AI agents and skills
- Optimizing prompts for consistency and reliability
- Teaching prompt engineering best practices
- Debugging prompt performance issues
- Creating prompt templates for reusable workflows

Overview

This skill packages advanced prompt engineering expertise for designing, testing, and optimizing prompts and custom instructions for AI agents. It focuses on practical techniques—including structure, few-shot examples, chain-of-thought, and context management—to improve reliability and control. The guidance suits builders who need repeatable, high-quality prompt workflows for production or research.

How this skill works

The skill inspects existing prompts and agent instructions, identifies weaknesses, and recommends concrete rewrites and templates. It applies evaluation criteria and test cases to measure performance, suggests token- and context-optimization tactics, and provides patterns for role-based and multimodal prompting. Outputs include improved prompt variants, success metrics, and reusable templates.
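
The evaluation step described above can be sketched as a small harness that runs each test case through a model call and scores the answers. `call_model` below is a hypothetical stub standing in for whatever client you actually use:

```python
# Minimal prompt-evaluation harness: run each test case through a
# model call (stubbed here) and score exact-match accuracy.
def call_model(prompt: str) -> str:
    return "positive"  # placeholder; replace with a real API call

def evaluate(prompt_template: str, cases: list[tuple[str, str]]) -> float:
    hits = 0
    for text, expected in cases:
        answer = call_model(prompt_template.format(input=text)).strip().lower()
        hits += answer == expected
    return hits / len(cases)

cases = [("I love it", "positive"), ("Terrible", "negative")]
score = evaluate("Classify sentiment: {input}", cases)
print(f"accuracy: {score:.2f}")  # prints "accuracy: 0.50" with the stub above
```

The same harness can be rerun after each prompt revision to compare baseline and candidate scores on identical cases.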

When to use it

  • Diagnosing a prompt that returns inconsistent or low-quality answers
  • Designing system prompts and custom instructions for new agents
  • Scaling prompts across domains or teams with reusable templates
  • Optimizing token use for large or multimodal context windows
  • Creating evaluation tests and success criteria for prompt changes

Best practices

  • Write clear, goal-oriented prompts with explicit output format and constraints
  • Use few-shot examples and role framing for complex tasks
  • Maintain concise system instructions separate from user-facing text
  • Iteratively test changes with small A/B style experiments and metrics
  • Monitor token usage and prioritize salient context to avoid window overflow
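
The last practice, prioritizing salient context within a budget, can be sketched as a simple trimmer. Token counts are approximated here by word count; a real tokenizer would give different numbers:

```python
# Keep the highest-priority context chunks that fit a token budget.
# Token cost is approximated by word count for this sketch.
def trim_context(chunks: list[tuple[int, str]], budget: int) -> list[str]:
    kept, used = [], 0
    # Most salient first (larger number = higher priority).
    for _, text in sorted(chunks, key=lambda c: -c[0]):
        cost = len(text.split())
        if used + cost <= budget:
            kept.append(text)
            used += cost
    return kept

chunks = [(3, "error log line"), (1, "old chat history"), (2, "user profile")]
print(trim_context(chunks, budget=5))  # ['error log line', 'user profile']
```

Greedy selection by priority is a deliberately simple policy; production systems often score chunks by relevance to the current query instead of a static rank.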

Example use cases

  • Refine a vague customer-support prompt to produce consistent step-by-step troubleshooting
  • Create a domain-specific system prompt for legal or medical draft generation
  • Convert a free-form workflow into a structured template with XML-like tags for parsable outputs
  • Design test cases to verify hallucination reduction after prompt changes
  • Optimize prompts for multimodal inputs by specifying how to reference images and embeddings

FAQ

Can this skill help reduce hallucinations?

Yes. It recommends factual grounding techniques, explicit constraints, source-citation prompts, and test cases to measure reduction in incorrect statements.

How do you evaluate prompt improvements?

Use targeted test cases, quantitative metrics (accuracy, consistency, token cost), and human review to compare baseline and revised prompts.
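
As a sketch of that comparison, baseline and revised prompts can be scored on the same cases and reduced to wins and regressions. The per-case scores below are invented placeholders:

```python
# Compare a baseline prompt against a revision on identical test cases
# (1 = pass, 0 = fail). Scores here are placeholder values.
baseline = {"case1": 1, "case2": 0, "case3": 0}
revised = {"case1": 1, "case2": 1, "case3": 0}

wins = sum(revised[c] > baseline[c] for c in baseline)
regressions = sum(revised[c] < baseline[c] for c in baseline)
print(f"wins={wins}, regressions={regressions}")  # prints "wins=1, regressions=0"
```

Tracking regressions separately from wins matters: a revision that fixes two cases but breaks one may still be a net loss for users who relied on the broken case.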