---
name: reasoning-controls
description: Use when adjusting reasoning depth, budgets, or metrics visibility; provides guidance for selecting and applying reasoning controls safely.
---
# Reasoning Controls
## Overview
Control reasoning depth and cost trade-offs using consistent settings and metrics.
## When to Use
- Adjusting reasoning depth or thinking mode
- Setting budget limits for cost or latency
- Reporting reasoning metrics
Avoid when:
- The task doesn’t require explicit reasoning controls
## Quick Reference
| Task | Load reference |
| --- | --- |
| Adjust reasoning | `skills/reasoning-controls/references/adjust.md` |
| Budget controls | `skills/reasoning-controls/references/budget.md` |
| Metrics reporting | `skills/reasoning-controls/references/metrics.md` |
## Workflow
1. Determine the control goal (depth, budget, metrics).
2. Load the matching reference.
3. Apply the control with the appropriate parameters.
4. Report settings and effects.
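The workflow above can be sketched in code. This is a minimal illustration only: `ReasoningConfig`, `apply_controls`, and the preset values are hypothetical names invented for this example, not part of any real agent API.

```python
# Hypothetical sketch -- ReasoningConfig and the presets are illustrative,
# not a real API.
from dataclasses import dataclass


@dataclass
class ReasoningConfig:
    depth: str = "standard"       # e.g. "minimal", "standard", "deep"
    max_tokens: int = 4096        # budget ceiling for reasoning tokens
    collect_metrics: bool = True  # emit latency/cost metrics after each run


def apply_controls(goal: str) -> ReasoningConfig:
    """Map a control goal (step 1) to a parameter preset (steps 2-3)."""
    presets = {
        "depth": ReasoningConfig(depth="deep"),
        "budget": ReasoningConfig(max_tokens=1024),
        "metrics": ReasoningConfig(collect_metrics=True),
    }
    return presets.get(goal, ReasoningConfig())


# Step 4: report the settings that will be applied.
print(apply_controls("budget"))
```

Printing the resulting config is one way to satisfy step 4 (report settings and effects) before any metrics come back.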
## Output
- Updated reasoning settings
- Metrics or confirmation output
## Common Mistakes
- Over-allocating budget for simple tasks
- Changing depth without explaining trade-offs
## About
This skill helps teams control reasoning depth, cost budgets, and metrics visibility when running AI agents. It provides clear guidance for selecting controls, applying them consistently, and reporting their effects. Use it to balance accuracy, latency, and cost across workflows.
Pick a control goal (depth, budget, or metrics), then load the corresponding guidance and parameter presets. Apply the control by setting reasoning depth, inference budget, or metrics-collection options, and run the agent with those parameters. The skill outputs the updated settings and measurable metrics or confirmations so you can evaluate impact.
## FAQ
**How do I choose an initial reasoning depth?**
Begin with the minimum depth that produces acceptable results on a validation set, then increase incrementally while tracking accuracy and cost.
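That search can be expressed as a short loop. A minimal sketch, assuming an `evaluate` callable that returns (accuracy, cost) for a given depth; the depth names and toy scores are invented for illustration.

```python
# Hypothetical sketch: evaluate() and the depth levels are assumptions,
# not a real API.
def pick_depth(depths, evaluate, target_accuracy):
    """Return the smallest depth whose validation accuracy meets the target."""
    for depth in depths:
        accuracy, cost = evaluate(depth)
        if accuracy >= target_accuracy:
            return depth, accuracy, cost
    return depths[-1], accuracy, cost  # fall back to the deepest setting


# Toy evaluation results: deeper settings score higher but cost more.
scores = {"minimal": (0.72, 1.0), "standard": (0.91, 2.5), "deep": (6.0 and (0.95, 6.0))}
scores = {"minimal": (0.72, 1.0), "standard": (0.91, 2.5), "deep": (0.95, 6.0)}
depth, acc, cost = pick_depth(["minimal", "standard", "deep"], scores.get, 0.90)
print(depth, acc, cost)  # → standard 0.91 2.5
```

Because the loop stops at the first depth that meets the target, it never pays for more reasoning than the validation set justifies.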
**What metrics should I track?**
Track accuracy or task-specific quality, latency, compute or token usage, and cost per run to evaluate trade-offs.
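Those four metrics can be collected per run. A minimal sketch: `summarize_run` and the token price are illustrative assumptions, not a standard API or real pricing.

```python
# Hypothetical sketch -- the metric names mirror the list above; the
# per-token price is an illustrative assumption.
def summarize_run(quality: float, latency_s: float, tokens: int,
                  price_per_1k_tokens: float = 0.01) -> dict:
    """Collect the four trade-off metrics for a single run."""
    return {
        "quality": quality,                                   # task-specific quality score
        "latency_s": latency_s,                               # wall-clock latency
        "tokens": tokens,                                     # compute / token usage
        "cost_per_run": tokens / 1000 * price_per_1k_tokens,  # derived cost
    }


print(summarize_run(quality=0.93, latency_s=2.4, tokens=5200))
```

Logging one such record per run gives you the data needed to compare depth and budget settings over time.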