
context-compressor skill

`/.claude/skills/context-compressor`

This skill compresses and summarizes conversations, code, and docs, preserving key decisions while reducing token usage.

`npx playbooks add skill oimiragieo/agent-studio --skill context-compressor`

---
name: context-compressor
description: Context compression and summarization methodology. Techniques for reducing token usage while preserving decision-critical information.
version: 1.0
model: sonnet
invoked_by: both
user_invocable: true
tools: [Read, Write]
best_practices:
  - Preserve decision-critical information
  - Remove redundant content
  - Use structured formats
  - Maintain traceability
error_handling: graceful
streaming: supported
---

# Context Compressor Skill

<identity>
Context Compressor Skill - Techniques for reducing token usage while preserving decision-critical information. Helps agents work efficiently within context limits.
</identity>

<capabilities>
- Compressing conversation history
- Summarizing code and documentation
- Extracting key decisions and context
- Creating efficient memory snapshots
- Reducing redundancy in context
</capabilities>

<instructions>
<execution_process>

### Step 1: Identify Compressible Content

Content types that can be compressed:

| Type          | Compression Strategy                         |
| ------------- | -------------------------------------------- |
| Code          | Keep signatures, summarize implementations   |
| Conversations | Extract decisions, drop small talk           |
| Documentation | Keep headings and key points                 |
| Errors        | Keep message and location, drop stack frames |
| Logs          | Keep patterns, drop repetitions              |
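The routing in the table above can be sketched as a small dispatcher. This is a minimal illustration only: the strategy bodies (a signature filter for code, exact-duplicate removal for logs) are stand-in assumptions, not part of any real API.

```javascript
// Minimal sketch: route content to a compression strategy by type.
// The type keys mirror the table above; the strategy bodies are illustrative.
const strategies = {
  // Keep lines that look like signatures, drop implementation lines.
  code: (text) =>
    text.split('\n').filter((l) => /function|class|=>/.test(l)).join('\n'),
  // Keep patterns, drop exact repetitions.
  logs: (text) => [...new Set(text.split('\n'))].join('\n'),
};

function compress(type, text) {
  const strategy = strategies[type];
  return strategy ? strategy(text) : text; // unknown types pass through
}
```

A real implementation would add strategies for conversations, docs, and errors, as covered in the techniques below.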

### Step 2: Apply Compression Techniques

**Technique 1: Decision Extraction**

Before:

```
User: Should we use Redis or Memcached?
Assistant: Let me analyze both options...
[500 words of analysis]
Recommendation: Redis for pub/sub support.
User: Ok let's use Redis.
```

After:

```
Decision: Use Redis (chosen for pub/sub support)
```
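Decision extraction can be approximated mechanically. A minimal sketch, assuming decisions appear on lines prefixed with `Decision:` or `Recommendation:`; real transcripts will need richer heuristics than this pattern match.

```javascript
// Minimal sketch: pull decision lines out of a transcript.
// The prefix pattern is an assumption, not a fixed transcript format.
function extractDecisions(transcript) {
  return transcript
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => /^(Decision|Recommendation):/i.test(line));
}
```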

**Technique 2: Code Summarization**

Before:

```javascript
// 100 lines of UserService implementation
```

After:

```
UserService: CRUD operations for users
- Methods: create, read, update, delete, findByEmail
- Dependencies: db, validator, logger
- Location: src/services/user.js
```
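One way to produce such a summary is to keep only method names and the file path. The sketch below uses a naive regex that covers simple `name(args) {` method forms; treat it as an illustration, not a parser.

```javascript
// Minimal sketch: summarize a JS source file as "location + method list".
// The regex is a rough heuristic for `name(args) {` forms, not a real parser.
function summarizeCode(source, location) {
  const methods = [...source.matchAll(/^\s*(?:async\s+)?(\w+)\s*\([^)]*\)\s*{/gm)]
    .map((m) => m[1]);
  return `${location}\n- Methods: ${methods.join(', ')}`;
}
```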

**Technique 3: Error Compression**

Before:

```
Error: Cannot read property 'id' of undefined
    at UserController.getUser (src/controllers/user.js:45:23)
    at Layer.handle [as handle_request] (node_modules/express/lib/router/layer.js:95:5)
    ... 20 more stack frames
```

After:

```
Error: Cannot read 'id' of undefined @ src/controllers/user.js:45
Cause: User object is null when accessing .id
```
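The stack-frame trimming above can be sketched as: keep the message line plus the first frame outside `node_modules`. This assumes V8-style `at ... (file:line:col)` frames; root-cause inference is left to the agent.

```javascript
// Minimal sketch: compress a stack trace to "message @ first app frame".
// Assumes V8-style frames; frames in node_modules are treated as noise.
function compressError(stack) {
  const lines = stack.split('\n');
  const message = lines[0];
  const frame = lines.find(
    (l) => l.trim().startsWith('at ') && !l.includes('node_modules')
  );
  const loc = frame
    ? (frame.match(/\(([^)]+)\)/)?.[1] ?? frame.trim())
    : 'unknown';
  return `${message} @ ${loc}`;
}
```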

### Step 3: Structure Compressed Output

Use consistent formats:

```markdown
## Session Summary

### Decisions Made

- [D1] Use Redis for caching
- [D2] JWT for authentication

### Files Modified

- src/auth/jwt.js (new)
- src/config/redis.js (updated)

### Open Items

- [ ] Add rate limiting
- [ ] Write tests for JWT
```
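Emitting the format above from structured data keeps summaries consistent across sessions. A minimal sketch; the field names (`decisions`, `files`, `openItems`) are illustrative assumptions.

```javascript
// Minimal sketch: render a summary object into the markdown format above.
// Field names are assumptions, not a defined schema.
function renderSummary({ decisions, files, openItems }) {
  return [
    '## Session Summary',
    '',
    '### Decisions Made',
    ...decisions.map((d, i) => `- [D${i + 1}] ${d}`),
    '',
    '### Files Modified',
    ...files.map((f) => `- ${f}`),
    '',
    '### Open Items',
    ...openItems.map((t) => `- [ ] ${t}`),
  ].join('\n');
}
```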

### Step 4: Validate Compression

Ensure critical information is preserved:

- [ ] All decisions captured
- [ ] Key file locations retained
- [ ] Error causes documented
- [ ] Next steps clear
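The structural part of this checklist can be automated. A minimal sketch that checks a summary for the section headings from Step 3 (the section names are assumptions tied to that format); semantic checks, like whether the rationale survived, still need human or agent review.

```javascript
// Minimal sketch: report which required sections are missing from a summary.
// Section names follow the Step 3 format and are assumptions.
function validateSummary(markdown) {
  const required = ['Decisions Made', 'Files Modified', 'Open Items'];
  return required.filter((section) => !markdown.includes(section));
}
// An empty result means the summary passed the structural check.
```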

</execution_process>

<best_practices>

1. **Preserve Decisions**: Never lose decision rationale
2. **Keep Locations**: File paths and line numbers are critical
3. **Summarize, Don't Delete**: Transform verbose content
4. **Use References**: Point to files instead of including content
5. **Test Recovery**: Can you continue work from compressed context?

</best_practices>
</instructions>

<examples>
<usage_example>
**Compression Request**:

```
Compress the last 10 messages for context handoff
```

**Example Response**:

```markdown
## Context Summary (compressed from 10 messages)

### Task

Implementing user authentication with JWT

### Progress

1. Created JWT utility (src/auth/jwt.js)
2. Added login endpoint (src/routes/auth.js)
3. Tests failing - mock issue

### Key Decisions

- Using RS256 for JWT signing (security over performance)
- 1-hour token expiry with refresh tokens

### Current Issue

Jest mock for `jwt.sign` not working
Location: src/auth/__tests__/jwt.test.js:23

### Next Steps

1. Fix JWT mock
2. Add refresh token endpoint
```

</usage_example>
</examples>

## Rules

- Never lose decision rationale
- Always include file locations
- Test that work can continue from compressed context

## Workflow Integration

This skill supports multi-agent orchestration by enabling efficient context management:

**Router Decision:** `.claude/workflows/core/router-decision.md`

- Router spawns agents that use this skill for context-efficient handoffs
- Used in long-running sessions to maintain continuity

**Artifact Lifecycle:** `.claude/workflows/core/skill-lifecycle.md`

- Compression patterns evolve with framework changes
- Session summaries feed into memory protocol

**Related Workflows:**

- `session-handoff` skill for complete handoff protocol
- `swarm-coordination` skill for multi-agent context sharing

---

## Memory Protocol (MANDATORY)

**Before starting:**

```bash
cat .claude/context/memory/learnings.md
```

**After completing:**

- New pattern -> `.claude/context/memory/learnings.md`
- Issue found -> `.claude/context/memory/issues.md`
- Decision made -> `.claude/context/memory/decisions.md`

> ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.

## Overview

This skill provides a practical methodology for compressing and summarizing context so agents can operate within token limits while preserving decision-critical information. It focuses on extracting decisions, summarizing code and docs, and creating compact memory snapshots that enable uninterrupted handoffs. The approach is actionable and format-driven to ensure compressed output remains usable for continued work.

## How this skill works

The skill inspects conversation history, code, logs, errors, and documentation to identify compressible content and key decision points. It applies targeted techniques: decision extraction, code summarization, error compression, and redundancy removal. Output is structured into consistent summaries (decisions, files, open items, next steps) and validated against a checklist to ensure no critical information is lost. File locations, decision rationale, and next actions are always retained to allow immediate resumption of work.

## When to use it

- Handoff between agents or teams where context length must be minimized
- Preparing long conversation history for an agent with strict token limits
- Saving compact session snapshots to memory or a knowledge base
- Reducing redundancy in logs, errors, or documentation before analysis
- Summarizing large code files while preserving APIs and dependencies

## Best practices

- Always preserve decision rationale and mark decisions clearly
- Keep file paths and important line references rather than full files
- Summarize implementations by signatures, methods, dependencies, and location
- Transform verbose content instead of deleting it; use references to originals
- Validate compressed output with a checklist (decisions, files, causes, next steps)

## Example use cases

- Compress the last 10 messages into a one-paragraph summary for a handoff
- Summarize a 200-line service file into methods, dependencies, and path
- Condense a stack trace into error message, likely cause, and file:line
- Create a session snapshot listing decisions, modified files, and open items
- Trim logs by extracting patterns and dropping repeated entries

## FAQ

**Will compression remove important context?**

No. The method prioritizes decisions, file locations, and causes; verbose or repetitive details are summarized, not discarded.

**How do I trust the compressed output is sufficient?**

Use the validation checklist provided: confirm decisions are captured, key file locations retained, error causes documented, and next steps listed.

**Can I reconstruct full content from the summary?**

Summaries include references to original files and locations so you can retrieve full content when needed; the summary itself is meant to enable immediate next actions.