
prompt-engineer skill


This skill helps you craft and optimize prompts for AI systems, improving clarity, context, and results across workflows.

npx playbooks add skill eddiebe147/claude-settings --skill prompt-engineer

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (5.5 KB)
---
name: Prompt Engineer
slug: prompt-engineer
description: Craft effective prompts and optimize AI interactions for better results
category: meta
complexity: simple
version: "1.0.0"
author: "ID8Labs"
triggers:
  - "optimize this prompt"
  - "improve this prompt"
  - "craft a prompt"
  - "design a prompt"
  - "prompt engineering"
tags:
  - prompt-engineering
  - optimization
  - AI-interaction
---

# Prompt Engineer

The Prompt Engineer skill helps you craft, refine, and optimize prompts for Claude Code and other AI systems. It applies proven prompt engineering principles including clarity, specificity, context provision, and structural best practices to transform vague requests into effective AI instructions.

This skill analyzes existing prompts for weaknesses, suggests improvements based on prompt engineering research, and helps you build prompt libraries for recurring tasks. It's particularly valuable when you need consistent, high-quality AI outputs or want to maximize the effectiveness of complex multi-step AI workflows.

Whether you're creating one-off prompts or building reusable templates, this skill helps keep your AI interactions clear and actionable so they produce the results you need.

## Core Workflows

### Workflow 1: Analyze & Optimize Existing Prompt
1. **Receive** the current prompt from user
2. **Analyze** against prompt engineering principles:
   - Clarity: Is the request unambiguous?
   - Specificity: Are outputs well-defined?
   - Context: Is necessary background provided?
   - Structure: Is the prompt well-organized?
   - Constraints: Are limitations clearly stated?
3. **Identify** weaknesses and improvement opportunities
4. **Provide** optimized version with explanations
5. **Test** improved prompt if requested
6. **Iterate** based on results
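
A hypothetical before/after shows what step 4 typically produces. The original prompt below is vague about audience, scope, and format; the optimized version makes each explicit:

```
Before:
Summarize this article.

After:
Summarize the attached article for a non-technical product manager.
- Cover the 3 main findings as bullet points.
- Add one sentence on why each finding matters.
- Use plain language, no jargon, under 150 words total.
```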

### Workflow 2: Design New Prompt from Scratch
1. **Clarify** the goal: What outcome is needed?
2. **Gather** requirements:
   - Target AI system capabilities
   - Output format requirements
   - Domain context needed
   - Edge cases to handle
3. **Structure** the prompt using proven patterns:
   - Role/persona if beneficial
   - Clear task description
   - Specific constraints and requirements
   - Output format specification
   - Examples if complex
4. **Draft** initial version
5. **Refine** for clarity and completeness
6. **Document** usage guidelines
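
As a sketch of step 3, here is how a new prompt for a hypothetical code-review task might be structured; the section labels mirror the pattern above and can be renamed or dropped as needed:

```
Role: You are a senior backend engineer reviewing a pull request.
Task: Review the diff below for correctness, security, and readability.
Constraints:
- Flag only issues you are reasonably confident about; mark others as "possible".
- Suggest targeted changes; do not rewrite whole files.
Output format: A markdown list grouped by severity (high / medium / low),
each item with a one-line explanation and a suggested fix.

Diff:
[paste diff here]
```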

### Workflow 3: Build Prompt Template Library
1. **Identify** recurring prompt patterns in workflow
2. **Extract** reusable components
3. **Parameterize** variable elements
4. **Document** template with:
   - Purpose and use cases
   - Parameter descriptions
   - Example usage
   - Expected outputs
5. **Test** template with multiple scenarios
6. **Store** in organized library structure
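
For example, a stored template entry might look like the sketch below; the placeholder syntax ({topic}, {audience}, {word_limit}) is illustrative, not a requirement of any particular tool:

```
Template: weekly-status-update
Purpose: Draft a weekly status update on a project or topic.
Parameters: {topic}, {audience}, {word_limit}

Prompt:
Write a weekly status update on {topic} for {audience}.
- Lead with the single most important change since last week.
- List current blockers, each with an owner and a next step.
- Keep the total under {word_limit} words.

Example usage: topic = "checkout redesign", audience = "engineering leads",
word_limit = 200
```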

## Quick Reference

| Action | Command/Trigger |
|--------|-----------------|
| Optimize existing prompt | "Optimize this prompt: [prompt]" |
| Design new prompt | "Design a prompt for [goal]" |
| Review prompt quality | "Review this prompt: [prompt]" |
| Create template | "Create a prompt template for [use case]" |
| Apply best practices | "Apply prompt engineering best practices to [prompt]" |
| Fix prompt issues | "This prompt isn't working well: [prompt]" |

## Best Practices

- **Be Specific**: Replace vague terms with concrete requirements
  - Bad: "Make it better"
  - Good: "Increase response accuracy by providing 3 cited examples"

- **Provide Context**: Give AI the background it needs
  - Include: Domain knowledge, target audience, constraints
  - Example: "For a technical audience familiar with React..."

- **Structure Clearly**: Use formatting to organize complex prompts
  - Sections, bullets, numbered steps
  - Clear delineation between instructions and examples

- **Define Success**: Specify what good output looks like
  - Format requirements (JSON, markdown, etc.)
  - Length constraints
  - Quality criteria

- **Use Examples**: Show, don't just tell, for complex outputs
  - Provide 1-3 examples of desired output
  - Include edge cases if relevant

- **Iterate**: Prompts improve through testing
  - Start simple, add complexity as needed
  - Test with edge cases
  - Refine based on actual outputs

- **Separate Concerns**: Don't mix multiple requests
  - One clear goal per prompt
  - Chain prompts for multi-step workflows (see the two-step example after this list)

- **Constrain Appropriately**: Set boundaries without over-constraining
  - Specify limits (word count, format)
  - Allow flexibility where creativity helps
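
As referenced under **Separate Concerns**, multi-step work is usually more reliable as a chain of small prompts than as one large prompt. A hypothetical two-step chain:

```
Prompt 1 (extract):
From the meeting transcript below, list every decision that was made,
one per line, in the form "decision (owner)".

Prompt 2 (summarize, run on Prompt 1's output):
Turn the following list of decisions into a short status email for the team.
Group items by owner and keep the email under 120 words.
```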

## Advanced Techniques

### Chain-of-Thought Prompting
Encourage step-by-step reasoning by asking the AI to "think through" problems:
```
Before providing the final answer, work through:
1. What are the key factors?
2. What are the trade-offs?
3. What does the evidence suggest?
Then provide your conclusion.
```

### Few-Shot Learning
Provide examples of input-output pairs:
```
Example 1: [input] → [output]
Example 2: [input] → [output]
Now apply the same pattern to: [new input]
```

### Role-Based Prompting
Assign expertise or perspective:
```
As a senior React architect with 10 years of experience,
review this component for performance issues...
```

### Constraint-Based Refinement
Use specific constraints to shape output:
```
Requirements:
- Maximum 3 paragraphs
- Include code examples
- Cite sources
- Use beginner-friendly language
```

## Common Pitfalls to Avoid

- Assuming context the AI doesn't have
- Being too vague about desired output format
- Mixing multiple unrelated requests
- Over-complicating simple requests
- Not specifying constraints until after receiving output
- Forgetting to provide examples for complex patterns
- Using ambiguous language or jargon without definition

Overview

This skill helps you craft, refine, and optimize prompts to get more reliable and useful outputs from Claude Code and other AI systems. It applies practical prompt engineering principles—clarity, specificity, context, structure, and constraints—to turn vague requests into actionable instructions. Use it to speed up prompt iterations, build reusable templates, and standardize AI interactions across projects.

How this skill works

Submit an existing prompt or describe a desired outcome. The skill inspects the prompt for clarity, specificity, context coverage, structure, and constraint definition, then highlights weaknesses and suggests concrete edits. It can draft new prompts from goals and requirements, generate parameterized templates for recurring tasks, and provide testable variants to iterate until results meet your success criteria.

When to use it

  • You get inconsistent or low-quality AI outputs
  • You need a reusable prompt template for recurring tasks
  • You’re designing a multi-step or chained workflow
  • You want to convert vague goals into precise AI instructions
  • You need to enforce output format, length, or citation rules

Best practices

  • Start with a clear goal and success criteria (format, length, quality).
  • Provide necessary domain context and define the audience.
  • Be specific and constrain outputs where needed (JSON, word count, sections).
  • Use examples (1–3) and edge cases for complex patterns.
  • Keep one primary objective per prompt; chain prompts for multi-step tasks.

Example use cases

  • Optimize a customer support prompt to produce concise, role-specific replies with suggested follow-ups.
  • Design a prompt to generate production-ready React component code with performance notes.
  • Create a parameterized template for weekly market summaries in JSON for ingestion into dashboards.
  • Build a chain-of-thought prompt for complex decision analysis that shows reasoning steps, then a final recommendation.
  • Convert human requirements into a strict, testable prompt that returns CSV or JSON for automation.

FAQ

Can you test improved prompts automatically?

Yes. I can generate test variants and example inputs to simulate outputs, but live testing depends on the target AI endpoint you use.

How do you handle prompts for different AI models?

I adapt wording and constraints to model capabilities and typical token behavior, and I recommend model-specific design patterns when necessary.

Will this make a prompt too rigid and reduce creativity?

I balance constraints and creative freedom by recommending where to be strict (format, critical checks) and where to allow flexibility (tone, examples).