
prompt-engineer skill

/skills/community/prompt-engineer

This skill designs and optimizes AI prompts using structured templates, few-shot and chain-of-thought techniques to improve model performance.

npx playbooks add skill aidotnet/moyucode --skill prompt-engineer


Files (1)
SKILL.md
2.8 KB
---
name: prompt-engineer
description: Design, optimize, and test AI model prompts using systematic prompt-engineering techniques, including few-shot learning, chain-of-thought, and structured output.
metadata:
  short-description: Design effective AI prompts
---

# Prompt Engineer Skill

## Description
Design and optimize prompts for AI models using proven techniques.

## Trigger
- `/prompt` command
- User requests prompt design
- User needs AI prompt optimization

## Prompt

You are a prompt-engineering expert who creates effective AI prompts.

### System Prompt Template

```markdown
You are a [ROLE] that [PRIMARY_FUNCTION].

## Core Responsibilities
1. [Responsibility 1]
2. [Responsibility 2]
3. [Responsibility 3]

## Guidelines
- Always [guideline 1]
- Never [guideline 2]
- When uncertain, [fallback behavior]

## Output Format
[Specify exact format expected]

## Examples
[Provide 2-3 examples of ideal responses]
```
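The bracketed placeholders can be filled programmatically before the prompt is sent to a model. A minimal Python sketch, where the `TEMPLATE` string is a trimmed version of the template above and all field values are illustrative:

```python
# Sketch: fill the bracketed template fields programmatically.
# TEMPLATE is a trimmed version of the system prompt template above;
# the field values are illustrative, not prescriptive.
TEMPLATE = """You are a {role} that {primary_function}.

## Guidelines
- Always {do}
- Never {dont}
- When uncertain, {fallback}
"""

prompt = TEMPLATE.format(
    role="technical support agent",
    primary_function="answers questions about the product API",
    do="cite the relevant docs section",
    dont="invent endpoints or parameters",
    fallback="ask a clarifying question",
)
print(prompt)
```

Keeping the template as a single string with named fields makes it easy to review the full prompt in one place while swapping personas per use case.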

### Few-Shot Learning

```markdown
Classify the sentiment of customer reviews.

Examples:
Review: "This product exceeded my expectations! Fast shipping too."
Sentiment: positive

Review: "Broke after one week. Complete waste of money."
Sentiment: negative

Review: "It works as described. Nothing special."
Sentiment: neutral

Now classify:
Review: "{user_input}"
Sentiment:
```
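A prompt like this can be assembled from a list of labeled example pairs rather than maintained as one string. A minimal Python sketch; `EXAMPLES` and `build_few_shot_prompt` are illustrative names, not a library API:

```python
# Sketch: assemble the few-shot sentiment prompt from labeled example pairs.
# EXAMPLES and build_few_shot_prompt are illustrative, not part of any library.
EXAMPLES = [
    ("This product exceeded my expectations! Fast shipping too.", "positive"),
    ("Broke after one week. Complete waste of money.", "negative"),
    ("It works as described. Nothing special.", "neutral"),
]

def build_few_shot_prompt(review: str) -> str:
    """Render the template above with a new review appended for classification."""
    shots = "\n\n".join(
        f'Review: "{text}"\nSentiment: {label}' for text, label in EXAMPLES
    )
    return (
        "Classify the sentiment of customer reviews.\n\n"
        f"Examples:\n{shots}\n\n"
        f'Now classify:\nReview: "{review}"\nSentiment:'
    )

print(build_few_shot_prompt("Arrived late, but it does the job."))
```

Storing the examples as data keeps them easy to audit and swap when you tune the shot set.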

### Chain-of-Thought

```markdown
Solve this step by step:

Problem: A store has 150 apples. They sell 40% on Monday and 30 more on Tuesday. How many remain?

Let me think through this:
1. Starting amount: 150 apples
2. Monday sales: 150 × 0.40 = 60 apples sold
3. After Monday: 150 - 60 = 90 apples
4. Tuesday sales: 30 apples sold
5. After Tuesday: 90 - 30 = 60 apples

Answer: 60 apples remain
```
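When a worked example like this ships inside a prompt, its arithmetic should be verified, since a wrong demonstration teaches the model the wrong procedure. A quick Python check of the steps above:

```python
# Sketch: verify the worked example's arithmetic before it ships in a prompt.
start = 150
sold_monday = start * 40 // 100        # Monday: 40% of 150 -> 60 sold
after_monday = start - sold_monday     # 150 - 60 -> 90 remain
after_tuesday = after_monday - 30      # Tuesday: 30 more sold -> 60 remain
print(after_monday, after_tuesday)     # 90 60
```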

### Structured Output

```markdown
Extract information from the text and return as JSON.

Text: "John Smith, age 32, works as a software engineer at Google in Mountain View. He can be reached at [email protected]."

Output format:
{
  "name": "string",
  "age": number,
  "occupation": "string",
  "company": "string",
  "location": "string",
  "email": "string"
}

Response:
{
  "name": "John Smith",
  "age": 32,
  "occupation": "software engineer",
  "company": "Google",
  "location": "Mountain View",
  "email": "[email protected]"
}
```
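Downstream code should validate the model's response before trusting it. A minimal Python sketch, assuming the field/type map mirrors the output format above; `EXPECTED_TYPES` and `parse_extraction` are illustrative names, and the `raw` string stands in for a model reply:

```python
import json

# Sketch: parse and validate a model's structured response.
# EXPECTED_TYPES mirrors the output format above; raw stands in for a reply.
EXPECTED_TYPES = {
    "name": str, "age": int, "occupation": str,
    "company": str, "location": str, "email": str,
}

def parse_extraction(raw: str) -> dict:
    """Raise ValueError if the response is not JSON matching the schema."""
    data = json.loads(raw)
    for field, typ in EXPECTED_TYPES.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field {field!r} missing or not {typ.__name__}")
    return data

raw = ('{"name": "John Smith", "age": 32, "occupation": "software engineer", '
       '"company": "Google", "location": "Mountain View", '
       '"email": "[email protected]"}')
print(parse_extraction(raw)["age"])  # 32
```

Failing loudly on a missing or mistyped field is what turns "format drift" from a silent data bug into a visible, retryable error.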

### Role-Based Prompting

```markdown
You are an expert code reviewer with 15 years of experience in TypeScript and React. You have a keen eye for:
- Performance bottlenecks
- Security vulnerabilities
- Code maintainability
- Best-practice violations

When reviewing code:
1. First identify critical issues that could cause bugs or security problems
2. Then note performance concerns
3. Finally suggest style improvements

Always explain WHY something is an issue, not just WHAT is wrong.
```

## Tags
`prompts`, `ai`, `llm`, `optimization`, `templates`

## Compatibility
- Codex: ✅
- Claude Code: ✅

Overview

This skill helps design, optimize, and test prompts for AI models using systematic prompt engineering techniques. It applies few-shot learning, chain-of-thought reasoning, role-based framing, and structured output templates to produce reliable, reproducible prompts. The goal is higher model accuracy, clearer outputs, and faster iteration for production use.

How this skill works

The skill generates tailored system and user prompt templates, injects examples for few-shot learning, and adds explicit output schemas for structured responses. It can introduce chain-of-thought scaffolding when stepwise reasoning is needed and crafts role-based instructions to bias model behavior toward a target persona or function. It also recommends test cases and evaluation prompts to validate and refine prompt performance.

When to use it

  • Creating initial prompts for a new AI-powered feature or agent
  • Improving accuracy or consistency of model outputs across inputs
  • Designing structured JSON or table outputs for downstream processing
  • Guiding models to follow a specific persona or review checklist
  • Diagnosing why a model hallucinates or drifts from the required output format

Best practices

  • Start with a clear system role and concise core responsibilities to reduce ambiguity
  • Use 2–5 high-quality examples for few-shot learning rather than many noisy ones
  • Explicitly specify output format and validation rules to prevent format errors
  • Prefer short, deterministic instructions for extraction tasks; enable chain-of-thought only when transparency is needed
  • Iteratively test with edge cases and measure performance with concrete metrics (accuracy, parse rate)
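The parse-rate metric mentioned above takes only a few lines to compute. A Python sketch over hypothetical model outputs (the `responses` list is illustrative data, not real model output):

```python
import json

# Sketch: measure parse rate over a small test set.
# responses stands in for model outputs collected from real test cases.
responses = [
    '{"name": "Ada", "age": 36}',
    'Sure! Here is the JSON: {"name": "Bob"}',  # format drift: extra prose
    '{"name": "Cy", "age": "unknown"}',
]

def parse_rate(outputs: list[str]) -> float:
    """Fraction of outputs that parse as a JSON object."""
    ok = 0
    for out in outputs:
        try:
            if isinstance(json.loads(out), dict):
                ok += 1
        except json.JSONDecodeError:
            pass
    return ok / len(outputs)

print(f"parse rate: {parse_rate(responses):.0%}")  # parse rate: 67%
```

Tracking a metric like this across prompt revisions turns tuning from guesswork into measurable iteration.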

Example use cases

  • Design a prompt that extracts contact information as validated JSON from free text
  • Create a code-review persona prompt that prioritizes security and performance findings
  • Optimize a customer-support triage prompt to classify tickets into priority buckets
  • Build a stepwise math-reasoning prompt that explains each calculation before answering
  • Convert vague business requirements into a structured checklist for an LLM-driven workflow

FAQ

How many examples should I include for few-shot prompts?

Use 2–5 representative, high-quality examples that cover common variations; avoid large sets of inconsistent examples.

When should I enable chain-of-thought?

Enable it when you need transparency or stepwise reasoning for complex tasks; disable it for short deterministic extraction to reduce verbosity and unpredictability.