
context-optimization skill

/plugins/conserve/skills/context-optimization

This skill proactively analyzes context usage to optimize multi-step tasks and prevent context pressure from hindering progress.

npx playbooks add skill athola/claude-night-market --skill context-optimization

Review the files below or copy the command above to add this skill to your agents.

Files (6)
SKILL.md
4.6 KB
---
name: context-optimization
description: 'Use this skill BEFORE starting complex tasks. Check context levels proactively.
  Use when context usage approaches 50% of the window, tasks need decomposition, complex
  multi-step operations are planned, or context pressure is high. Do not use for simple
  single-step tasks with low context usage. DO NOT use when already using mcp-code-execution
  for tool chains.'
category: conservation
token_budget: 150
progressive_loading: true
hooks:
  PreToolUse:
  - matcher: Read
    command: 'echo "[skill:context-optimization] πŸ“Š Context analysis started: $(date)"
      >> ${CLAUDE_CODE_TMPDIR:-/tmp}/skill-audit.log

      '
    once: true
  PostToolUse:
  - matcher: Bash
    command: "# Track context analysis tools\nif echo \"$CLAUDE_TOOL_INPUT\" | grep\
      \ -qE \"(wc|tokei|cloc|context)\"; then\n  echo \"[skill:context-optimization]\
      \ Context measurement executed: $(date)\" >> ${CLAUDE_CODE_TMPDIR:-/tmp}/skill-audit.log\n\
      fi\n"
  Stop:
  - command: 'echo "[skill:context-optimization] === Optimization completed at $(date)
      ===" >> ${CLAUDE_CODE_TMPDIR:-/tmp}/skill-audit.log

      # Could export: context pressure events over time

      '
---
## Table of Contents

- [Quick Start](#quick-start)
- [When to Use](#when-to-use)
- [When NOT to Use](#when-not-to-use)
- [Core Hub Responsibilities](#core-hub-responsibilities)
- [Module Selection Strategy](#module-selection-strategy)
- [Context Classification](#context-classification)
- [Large Output Handling](#large-output-handling-claude-code-212)
- [Integration Points](#integration-points)
- [Resources](#resources)
- [Troubleshooting](#troubleshooting)


# Context Optimization Hub

## Quick Start

### Basic Usage
```bash
# Analyze current context usage
python -m conserve.context_analyzer
```

## When To Use

- **Threshold Alert**: When context usage approaches 50% of the window.
- **Complex Tasks**: For operations requiring multi-file analysis or long tool chains.

## When NOT To Use

- Simple single-step tasks with low context usage
- Already using mcp-code-execution for tool chains

## Core Hub Responsibilities

1. Assess context pressure and MECW compliance.
2. Route to appropriate specialized modules.
3. Coordinate subagent-based workflows.
4. Manage token budget allocation across modules.
5. Synthesize results from modular execution.
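Responsibility 4 (token budget allocation) can be sketched as a simple proportional split. This is an illustrative helper, not part of the skill's API; `allocate_token_budget` and the weight values are assumptions for the example.

```python
def allocate_token_budget(total_tokens, module_weights):
    """Split a total token budget across modules proportionally to their weights.

    module_weights maps module name -> relative weight; the result maps
    module name -> integer token allotment (truncated, so the sum never
    exceeds total_tokens).
    """
    total_weight = sum(module_weights.values())
    return {
        module: int(total_tokens * weight / total_weight)
        for module, weight in module_weights.items()
    }
```

For example, `allocate_token_budget(150, {"mecw-assessment": 2, "subagent-coordination": 1})` gives the assessment module 100 tokens and the coordination module 50, keeping the combined spend within the hub's `token_budget` of 150.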

## Module Selection Strategy

```python
def select_optimal_modules(context_situation, task_complexity):
    """Route to specialized modules based on context pressure and task size."""
    if context_situation == "CRITICAL":
        return ["mecw-assessment", "subagent-coordination"]
    elif task_complexity == "high":
        return ["mecw-principles", "subagent-coordination"]
    else:
        return ["mecw-assessment"]
```

## Context Classification

| Utilization | Status | Action |
|-------------|--------|--------|
| < 30% | LOW | Continue normally |
| 30-50% | MODERATE | Monitor, apply principles |
| > 50% | CRITICAL | Immediate optimization required |
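The table above maps directly to a threshold check. A minimal sketch, assuming utilization is available as a fraction of the window (the function name is illustrative):

```python
def classify_context(utilization):
    """Map a context utilization fraction (0.0-1.0) to a status level.

    Mirrors the classification table: < 30% LOW, 30-50% MODERATE,
    > 50% CRITICAL (the boundary values fall into MODERATE).
    """
    if utilization > 0.50:
        return "CRITICAL"
    elif utilization >= 0.30:
        return "MODERATE"
    return "LOW"
```

A CRITICAL result is what feeds the `context_situation` argument of `select_optimal_modules` above.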

## Large Output Handling (Claude Code 2.1.2+)

**Behavior Change**: Large bash command and tool outputs are saved to disk instead of being truncated; file references are provided for access.

### Impact on Context Optimization

| Scenario | Before 2.1.2 | After 2.1.2 |
|----------|--------------|-------------|
| Large test output | Truncated, partial data | Full output via file reference |
| Verbose build logs | Lost after 30K chars | Complete, accessible on demand |
| Context pressure | Reduced by truncation | Unchanged; files add context only when read |

### Best Practices

- **Avoid pre-emptive reads**: Large outputs are referenced, not automatically loaded into context.
- **Read selectively**: Use `head`, `tail`, or `grep` on file references.
- **Leverage full data**: Quality gates can access complete test results via files.
- **Monitor growth**: File references are small, but reading the full files adds to context.
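The "read selectively" practice can be sketched in Python as well as with `head`/`tail`/`grep`. This is a hypothetical helper, not part of Claude Code: it pulls at most `max_lines` lines from a large output file, optionally filtered by a substring, instead of loading the whole file into context.

```python
from itertools import islice


def read_selectively(path, pattern=None, max_lines=50):
    """Read only what's needed from a large output file.

    Streams the file line by line, keeps lines containing `pattern`
    (or all lines if pattern is None), and stops after max_lines,
    so the full file is never held in memory or context.
    """
    with open(path) as f:
        matching = (line for line in f if pattern is None or pattern in line)
        return list(islice(matching, max_lines))
```

For a quality gate, `read_selectively("test-output.log", pattern="FAIL", max_lines=20)` surfaces only failing lines, keeping the context cost proportional to what was actually requested.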

## Integration Points

- **Token Conservation**: Receives usage strategies, returns MECW-compliant optimizations.
- **CPU/GPU Performance**: Aligns context optimization with resource constraints.
- **MCP Code Execution**: Delegates complex patterns to specialized MCP modules.

## Resources

- **MECW Theory**: See `modules/mecw-principles.md` for core concepts and the 50% rule.
- **Context Analysis**: See `modules/mecw-assessment.md` for risk identification.
- **Workflow Delegation**: See `modules/subagent-coordination.md` for decomposition patterns.
- **Context Waiting**: See `modules/context-waiting.md` for deferred loading strategies.

## Troubleshooting

### Common Issues

If context usage remains high after optimization, check for large files that were read entirely rather than selectively. If MECW assessments fail, ensure that your environment provides accurate token count metadata. For permission errors when writing output logs to `/tmp`, verify that the project's temporary directory is writable.

Overview

This skill helps you proactively evaluate and optimize conversational context before starting complex tasks. It detects rising context pressure, recommends module routing, and enforces token-conservation principles so multi-step workflows run reliably within window limits. Use it as a preflight check to avoid mid-task context failures.

How this skill works

The skill inspects current token/utilization metrics and classifies context pressure into LOW, MODERATE, or CRITICAL. It recommends or automatically selects specialized modules (MECW assessment, subagent coordination, deferred loading) and returns a concrete plan for token budgeting and selective data loading. For large outputs it marks file references and advises selective reads rather than loading full content.

When to use it

  • When context usage approaches or exceeds ~50% of the window
  • Before planning complex multi-file analysis or long tool chains
  • When tasks require decomposition into coordinated subagents
  • When token budgets must be allocated across modules
  • Not for simple, single-step tasks with low context usage

Best practices

  • Monitor token utilization continuously and trigger the hub at ~30–50% to avoid last-minute crises
  • Prefer file references for large outputs; read head/tail or grep instead of loading entire files
  • Route CRITICAL cases to MECW assessment and subagent coordination for immediate optimization
  • Allocate token budgets per module up front and enforce soft limits during execution
  • Avoid duplicative reads: cache derived summaries rather than reloading raw large outputs

Example use cases

  • Preflight before running a multi-stage CI test suite that produces verbose logs
  • Decomposing a code review across files and assigning subagents to each module with token caps
  • Handling build or test outputs that exceed in-memory context by saving to files and selectively reading results
  • Coordinating a long tool chain where CPU/GPU constraints and context limits both matter

FAQ

What triggers a CRITICAL classification?

When measured context utilization exceeds the configured critical threshold (commonly >50%), or when projected task expansion would breach the window.

How does it handle large command outputs?

Large outputs are saved to disk and returned as file references; the hub recommends selective reads (head/tail/grep) to limit added context.