
llm-communication skill


This skill helps you craft effective LLM prompts and agent instructions using role, objective, and latitude.

npx playbooks add skill phrazzld/claude-config --skill llm-communication

---
name: llm-communication
description: "Write effective LLM prompts, commands, and agent instructions. Goal-oriented over step-prescriptive. Role + Objective + Latitude pattern. Use when writing prompts, designing agents, building Claude Code commands, or reviewing LLM instructions. Keywords: prompt engineering, agent design, command writing."
effort: high
---

# Talking to LLMs

This skill helps you write effective prompts, commands, and agent instructions.

## Core Principle

LLMs are intelligent agents, not script executors. Talk to them like senior engineers.

## Anti-Patterns

### Over-Prescriptive Instructions

Bad:
```
Step 1: Run `sentry-cli issues list --status unresolved`
Step 2: Parse the JSON output
Step 3: For each issue, calculate priority score using formula...
Step 4: Select highest priority issue
Step 5: Run `git log --since="24 hours ago"`
...700 more lines
```

This treats the LLM like a bash script executor. It's brittle, verbose, and removes the LLM's ability to adapt.

### Excessive Hand-Holding

Bad:
```
If the user says X, do Y.
If the user says Z, do W.
Handle edge case A by doing B.
Handle edge case C by doing D.
```

You can't enumerate every case. Trust the LLM to generalize.

### Defensive Over-Specification

Bad:
```
IMPORTANT: Do NOT do X.
WARNING: Never do Y.
CRITICAL: Always remember to Z.
```

If you need 10 warnings, your instruction is probably wrong.

## Good Patterns

### State the Goal, Not the Steps

Good:
```
Investigate production errors. Check all available observability (Sentry, Vercel, logs).
Correlate with git history. Find root cause. Propose fix.
```

Let the LLM figure out how.

### Provide Context, Not Constraints

Good:
```
You're a senior SRE investigating an incident.
The user indicated something broke around 14:57.
```

Frame the situation; don't micromanage the response.

### Trust Recovery

Good:
```
Trust your judgment. If something doesn't work, try another approach.
```

LLMs can recover from errors. Let them.

### Role + Objective + Latitude

The best prompts follow this pattern:
1. **Role**: Who is the LLM in this context?
2. **Objective**: What's the end goal?
3. **Latitude**: How much freedom do they have?

Example:
```
You're a senior engineer reviewing this PR.           # Role
Find bugs, security issues, and code smells.          # Objective
Be direct. If it's fine, say so briefly.              # Latitude
```
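The pattern can also be sketched as a tiny helper. This is a minimal illustration in Python; the function name and fields are made up for this example, not part of any real API:

```python
# Minimal sketch: assemble a prompt from the Role + Objective + Latitude
# pattern. The helper and its parameter names are illustrative only.

def build_prompt(role: str, objective: str, latitude: str) -> str:
    """Join the three parts into one compact, goal-oriented prompt."""
    return "\n".join([role, objective, latitude])

prompt = build_prompt(
    role="You're a senior engineer reviewing this PR.",
    objective="Find bugs, security issues, and code smells.",
    latitude="Be direct. If it's fine, say so briefly.",
)
```

Note that the whole prompt is three lines: each part earns its place, and nothing else does.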

## When Writing Claude Code Commands

Commands are prompts. The same rules apply:

**Bad command (700 lines):**
- Exhaustive decision trees
- Exact CLI commands to copy
- Every edge case enumerated
- No room for judgment

**Good command (20 lines):**
- Clear objective
- Context about what tools exist
- Permission to figure it out
- Trust in agent judgment
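A compact command in this spirit might look like the following. The scenario and tool names are hypothetical, shown only to illustrate the shape:

```
You're an on-call engineer triaging a production error.
Use whatever observability tools are available (error tracker, logs, deploy history).
Correlate the error with recent changes, find the root cause, and propose a fix.
Trust your judgment; if one tool is unavailable, try another.
```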

## When Building Agentic Systems

Same principles scale up:

**Bad agent design:**
- Rigid state machines
- Exhaustive action lists
- No error recovery
- Brittle integrations

**Good agent design:**
- Goal-oriented
- Self-correcting
- Minimal constraints
- Natural language interfaces
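The "self-correcting" property can be sketched as a loop that tries approaches until a goal check passes, rather than a fixed state machine. Everything here — the function names, the stand-in tools — is illustrative, not a real agent framework:

```python
# Minimal sketch of a goal-oriented, self-correcting agent loop.
# The approaches and goal check are stand-ins for real tool calls.

def run_agent(goal_met, approaches):
    """Try each approach in order until the goal check passes."""
    for attempt in approaches:
        try:
            result = attempt()
        except Exception:
            continue  # recover: fall through to the next approach
        if goal_met(result):
            return result
    return None  # all approaches exhausted


# Usage: the first approach fails; the agent recovers with the second.
def flaky():
    raise RuntimeError("tool unavailable")

result = run_agent(lambda r: r == "root cause found",
                   [flaky, lambda: "root cause found"])
```

The loop encodes the goal and permission to recover; it does not enumerate how each approach must work.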

## The Test

Before finalizing any LLM instruction, ask:

> "Would I give these instructions to a senior engineer?"

If you'd be embarrassed to hand a colleague a 700-line runbook for a simple task, don't give it to the LLM either.

## Remember

The L in LLM stands for Language. Use it.

Overview

This skill teaches how to write effective LLM prompts, commands, and agent instructions using a goal-oriented pattern. It emphasizes Role + Objective + Latitude and avoids over-prescription, excessive hand-holding, and defensive noise. Use it to produce concise, adaptable prompts and agent designs that let models apply judgment.

How this skill works

The skill inspects prompt structure and replaces step-by-step scripts with goal statements and contextual framing. It evaluates commands and agent specs for brittleness, recommending minimal constraints, explicit roles, clear objectives, and defined latitude. It also translates those recommendations into compact Claude Code commands or agent templates.

When to use it

  • Writing prompts for single-turn or multi-turn LLM interactions
  • Designing agent behaviors, actions, and recovery strategies
  • Authoring Claude Code commands or other LLM-operated commands
  • Reviewing existing instructions for brittleness or verbosity
  • Converting operational runbooks into goal-oriented prompts

Best practices

  • State Role + Objective + Latitude instead of enumerating steps
  • Give relevant context, not exhaustive constraints or warnings
  • Favor goals and outcomes; let the model choose methods
  • Allow recovery and adaptation — trust the LLM to generalize
  • Keep commands compact (one screen) with permission to act

Example use cases

  • Turn a 700-line procedural runbook into a short incident prompt for a senior SRE
  • Write a Claude Code command that lets an agent triage errors using available tools
  • Design an agent spec that prioritizes goals and self-correction over fixed action lists
  • Review a pull request instruction set and replace rigid checks with objectives and acceptance criteria
  • Create customer-facing prompts that give context and latitude for problem-solving

FAQ

What if the model makes a wrong choice when I give latitude?

Allow for corrective prompts, and design the interaction to include checkpoints or verification steps rather than rigid preconditions.

How do I set latitude safely for sensitive tasks?

Define acceptable boundaries (e.g., "do not modify production databases") but avoid exhaustive negatives; prefer clear objectives and verification rules.
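One way to encode such a boundary is as a verification step the caller runs on the model's proposal, instead of an exhaustive list of warnings in the prompt. A minimal sketch, with an illustrative denylist:

```python
# Sketch: check a proposed action against a small stated boundary
# instead of enumerating every forbidden case. Illustrative only.

FORBIDDEN = ("drop table", "delete from production")

def verify(proposal: str) -> bool:
    """Accept the proposal unless it crosses a stated boundary."""
    lowered = proposal.lower()
    return not any(bad in lowered for bad in FORBIDDEN)

# verify("Add an index to the users table") passes;
# verify("DROP TABLE users") is rejected.
```

The prompt stays goal-oriented; the hard limits live in a checkable rule outside it.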