
prompt-engineer skill

/skills/ai/prompt-engineer

This skill helps you craft high-quality prompts for LLMs, enabling clear context, structured outputs, and iterative improvements.

npx playbooks add skill enoch-robinson/agent-skill-collection --skill prompt-engineer

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
3.3 KB
---
name: prompt-engineer
description: A guide to prompt engineering best practices. Use this skill when the user needs to optimize AI prompts, design system prompts, improve LLM output quality, or build prompt strategies for AI applications.
---

# Prompt Engineer

Master the art of communicating effectively with large language models, and design prompts that produce high-quality, consistent outputs.

## Core Principles

1. **Be clear and specific**: state exactly what you want
2. **Provide context**: give the model enough background information
3. **Structure the output**: specify the expected output format
4. **Iterate**: continuously test and refine

## Prompt Structure Template

### Basic structure

```
[Role definition]
You are a {domain} expert, skilled at {specific skills}.

[Task description]
Please help me {specific task}.

[Context]
Background: {relevant background}
Constraints: {limitations}

[Output requirements]
Please respond in {format}, including {specific elements}.

[Examples] (optional)
Input: {example input}
Output: {example output}
```
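The template above lends itself to programmatic assembly. A minimal sketch in Python; the function name and field names are illustrative, not part of any required schema:

```python
def build_prompt(role, task, background, constraints, output_format):
    """Assemble a prompt following the role/task/context/output structure."""
    return (
        f"You are a {role}.\n\n"
        f"Please help me {task}.\n\n"
        f"Background: {background}\n"
        f"Constraints: {constraints}\n\n"
        f"Please respond in {output_format}."
    )

# Example usage with hypothetical values:
prompt = build_prompt(
    role="senior backend engineer",
    task="review this API design",
    background="a REST service handling 1k requests/sec",
    constraints="no breaking changes to existing endpoints",
    output_format="a bulleted list of issues and suggested fixes",
)
```

Keeping each section as a separate parameter makes it easy to A/B test variations of one section (e.g. the role) while holding the rest constant.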

## Key Techniques

### 1. Role Prompting

```
You are a senior backend engineer with 10 years of experience, focused on:
- Distributed systems design
- Performance optimization
- Code quality control

Please review the following code in this role...
```

### 2. Few-shot Learning

```
Classify user feedback as: positive, negative, or neutral

Example 1:
Input: "This product is amazing!"
Output: positive

Example 2:
Input: "Shipping was too slow, very disappointed"
Output: negative

Now classify:
Input: "The product is okay, but a bit pricey"
Output:
```
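When the labeled examples live in data rather than in a hand-written string, a small builder keeps the few-shot format consistent. A sketch, with an illustrative function name:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot classification prompt from (text, label) pairs.

    The prompt ends with a bare 'Output:' so the model's completion
    is the label itself.
    """
    parts = [instruction, ""]
    for i, (text, label) in enumerate(examples, 1):
        parts += [f"Example {i}:", f'Input: "{text}"', f"Output: {label}", ""]
    parts += ["Now classify:", f'Input: "{query}"', "Output:"]
    return "\n".join(parts)

# Example usage:
fs = few_shot_prompt(
    "Classify user feedback as: positive, negative, or neutral",
    [("This product is amazing!", "positive"),
     ("Shipping was too slow, very disappointed", "negative")],
    "The product is okay, but a bit pricey",
)
```

Generating examples from data also makes it cheap to experiment with how many shots, and which ones, give the best classification boundary.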

### 3. Chain of Thought

```
Please analyze this problem step by step:

1. First, identify the core of the problem
2. Then, list possible solutions
3. Next, evaluate the pros and cons of each option
4. Finally, give a recommended solution with reasoning
```

### 4. Output Format Control

```
Please return JSON with the following structure:
{
  "summary": "brief summary",
  "key_points": ["point 1", "point 2"],
  "recommendation": "suggestion",
  "confidence": 0.0-1.0
}
```
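Even with a format instruction like the one above, models sometimes wrap the JSON in prose or code fences, so downstream code should parse defensively. A minimal sketch using only the standard library; the expected field names match the schema above:

```python
import json

def parse_model_json(raw):
    """Extract and validate the JSON object from a model response.

    Slices from the first '{' to the last '}' to tolerate surrounding
    prose or markdown fences, then checks the expected fields.
    """
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    data = json.loads(raw[start:end + 1])
    for key in ("summary", "key_points", "recommendation", "confidence"):
        if key not in data:
            raise ValueError(f"missing field: {key}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data
```

Raising on missing fields (rather than silently defaulting) surfaces prompt regressions early, which feeds directly into the iterate-and-refine loop.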

## System Prompt Design

### General template

```
## Role
You are {role name}, {role description}.

## Abilities
- {ability 1}
- {ability 2}

## Guidelines
- {guideline 1}
- {guideline 2}

## Limits
- Do not {limit 1}
- Avoid {limit 2}

## Output style
{style description}
```
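For products that maintain several assistants, the template above can be rendered from configuration so every system prompt stays structurally identical. A sketch; the function and section names are illustrative:

```python
def build_system_prompt(name, description, abilities, rules, limits, style):
    """Render the Role/Abilities/Guidelines/Limits/Output-style template."""
    sections = [
        f"## Role\nYou are {name}, {description}.",
        "## Abilities\n" + "\n".join(f"- {a}" for a in abilities),
        "## Guidelines\n" + "\n".join(f"- {r}" for r in rules),
        "## Limits\n" + "\n".join(f"- Do not {l}" for l in limits),
        f"## Output style\n{style}",
    ]
    return "\n\n".join(sections)

# Example usage with hypothetical values:
sp = build_system_prompt(
    name="CodeHelper",
    description="a professional coding assistant",
    abilities=["code review", "refactoring suggestions"],
    rules=["prioritize readability", "explain key design decisions"],
    limits=["fabricate APIs", "guess at undocumented behavior"],
    style="Plain, direct prose with code blocks where helpful",
)
```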

### Coding assistant example

```
## Role
You are a professional coding assistant.

## Guidelines
- Comment code clearly
- Prioritize readability and maintainability
- Proactively point out potential problems
- Explain key design decisions

## Output format
1. Briefly describe the approach
2. Provide complete code
3. Explain the key parts
4. List caveats
```

## Common Problems and Fixes

| Problem | Fix |
|---------|-----|
| Output too long | Add a length limit: "within 200 words" |
| Inconsistent output | Provide more examples; make format requirements explicit |
| Misunderstood intent | Break the task down and confirm step by step |
| Hallucinations | Require cited sources; add "say so if you are unsure" |

## Advanced Techniques

### 1. Self-reflection prompts

```
After completing the task, please:
1. Check that all requirements are met
2. Point out possible improvements
3. Rate your confidence (1-10)
```

### 2. Constraint boundaries

```
Important constraints:
- Use only the information provided
- State explicitly when unsure
- Do not fabricate data or citations
```

## References

- Anthropic Prompt Engineering: https://docs.anthropic.com/claude/docs/prompt-engineering
- OpenAI Best Practices: https://platform.openai.com/docs/guides/prompt-engineering

Overview

This skill is a practical guide to prompt engineering best practices for designing high-quality prompts, system messages, and prompt strategies for AI applications. It helps users structure prompts, control output formats, and reduce hallucinations. The guidance is hands-on and suited for developers, product managers, and AI practitioners seeking consistent LLM outputs.

How this skill works

The skill inspects prompts and system messages, evaluates clarity, context, and output constraints, and suggests concrete rewrites or templates. It provides role-based and few-shot templates, chain-of-thought patterns, and JSON/structured-output controls. It also includes diagnostic techniques to iterate and measure prompt performance.

When to use it

  • Designing system prompts for an AI assistant or product
  • Improving output consistency and reducing hallucinations
  • Creating few-shot examples and role definitions for specialized tasks
  • Specifying output formats (JSON, CSV, bullet lists) for downstream parsing
  • Optimizing prompts for classification, summarization, or code generation

Best practices

  • Be explicit: state role, task, context, constraints, and exact output format
  • Provide minimal but sufficient context; include examples for ambiguous tasks
  • Use role prompting to set persona and capabilities when domain expertise is needed
  • Iterate: test variations, measure outputs, and refine prompts based on failure modes
  • Constrain hallucinations: require citations, limit scope to provided info, and ask model to admit uncertainty
  • Control structure: request machine-parseable formats (JSON schema) and include examples

Example use cases

  • Customer support: craft system prompt + few-shot examples to classify and triage tickets
  • Code review: role-based prompt that requests annotated diffs, potential bugs, and improvements
  • Summarization: structured prompts that return concise summary, key points, and confidence score in JSON
  • Data extraction: specify field schema and examples so the model returns consistent, parseable records
  • Product design: use chain-of-thought template to generate options, pros/cons, and recommended next steps

FAQ

How do I prevent the model from inventing facts?

Constrain the task: ask the model to use only provided context, require citations or explicit "I don't know" when unsure, and include a verification step in the prompt.

When should I use few-shot examples versus detailed instructions?

Use few-shot examples when the desired output style or classification boundary is subtle; use detailed instructions when the task is deterministic and format constraints are primary.