
reasoning-controls skill

This skill helps you manage reasoning depth, budgets, and metrics visibility by providing guidance for applying safe, consistent controls across tasks.

npx playbooks add skill nickcrew/claude-cortex --skill reasoning-controls

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
1.1 KB
---
name: reasoning-controls
description: Use when adjusting reasoning depth, budgets, or metrics visibility - provides guidance for selecting and applying reasoning controls safely.
---

# Reasoning Controls

## Overview
Control reasoning depth and cost trade-offs using consistent settings and metrics.

## When to Use
- Adjusting reasoning depth or thinking mode
- Setting budget limits for cost or latency
- Reporting reasoning metrics

Avoid when:
- The task doesn’t require explicit reasoning controls

## Quick Reference

| Task | Load reference |
| --- | --- |
| Adjust reasoning | `skills/reasoning-controls/references/adjust.md` |
| Budget controls | `skills/reasoning-controls/references/budget.md` |
| Metrics reporting | `skills/reasoning-controls/references/metrics.md` |

## Workflow
1. Determine the control goal (depth, budget, metrics).
2. Load the matching reference.
3. Apply the control with the appropriate parameters.
4. Report settings and effects.
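The workflow above can be sketched in code. This is an illustrative outline only, assuming hypothetical names (`ReasoningSettings`, `PRESETS`, `apply_control`); the skill itself does not define a concrete API.

```python
# Hypothetical sketch of the workflow: pick a control goal, load the
# matching preset, apply it, and report the resulting settings.
from dataclasses import dataclass


@dataclass
class ReasoningSettings:
    depth: str = "low"           # shallow vs. deep chains of thought
    max_tokens: int = 1024       # inference budget
    collect_metrics: bool = False


# One preset per control goal (step 1); values are illustrative.
PRESETS = {
    "depth":   {"depth": "high"},
    "budget":  {"max_tokens": 512},
    "metrics": {"collect_metrics": True},
}


def apply_control(settings: ReasoningSettings, goal: str) -> ReasoningSettings:
    """Apply the preset matching the control goal and return the updated settings."""
    for key, value in PRESETS[goal].items():
        setattr(settings, key, value)
    return settings


# Step 4: report settings and effects.
settings = apply_control(ReasoningSettings(), "budget")
print(f"depth={settings.depth} max_tokens={settings.max_tokens}")
```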

## Output
- Updated reasoning settings
- Metrics or confirmation output

## Common Mistakes
- Over-allocating budget for simple tasks
- Changing depth without explaining trade-offs

Overview

This skill helps teams control reasoning depth, cost budgets, and metrics visibility when running AI agents. It provides clear guidance for selecting controls, applying them consistently, and reporting their effects. Use it to balance accuracy, latency, and cost across workflows.

How this skill works

You pick a control goal—depth, budget, or metrics—then load the corresponding guidance and parameter presets. Apply the control by setting reasoning depth, inference budget, or metrics collection options, and run the agent with those parameters. The skill outputs updated settings and measurable metrics or confirmations so you can evaluate impact.

When to use it

  • Tuning how deeply an agent reasons on a problem (shallow vs. deep chains of thought).
  • Enforcing cost or latency limits for production workloads or demos.
  • Making reasoning trade-offs explicit for stakeholders or auditors.
  • Collecting and reporting metrics to compare strategy performance.
  • Automating budget enforcement across agent runs.
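Budget enforcement, the last case above, can be as simple as a guard that stops scheduling runs once a cap would be exceeded. A minimal sketch, assuming a flat per-task cost (real agent costs vary per run; `run_with_budget` is a hypothetical helper, not part of the skill):

```python
# Illustrative budget guard: run tasks only while the cumulative cost
# stays within the cap, preventing runaway spend across agent runs.
def run_with_budget(tasks, cost_per_task, budget):
    """Return the tasks that fit within the budget and the total spent."""
    spent = 0.0
    completed = []
    for task in tasks:
        if spent + cost_per_task > budget:
            break  # next run would exceed the cap; stop here
        spent += cost_per_task
        completed.append(task)
    return completed, spent
```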

Best practices

  • Start with conservative depth and budgets, then increase only if metrics show benefit.
  • Document trade-offs when changing depth or budget so reviewers understand the impact.
  • Prefer metrics-driven decisions: validate deeper reasoning by measuring performance improvement.
  • Use short confirmation runs to estimate cost/latency before full-scale execution.
  • Avoid over-allocating resources for simple, well-defined tasks.
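The confirmation-run practice above is just averaging a small sample and extrapolating. A sketch, assuming you have latency and cost figures from a few trial runs (`estimate_full_run` is an illustrative name):

```python
# Estimate full-scale latency/cost from a short confirmation sample
# before committing to the complete execution.
def estimate_full_run(sample_latencies, sample_costs, total_runs):
    """Extrapolate per-run averages from a sample to the full workload."""
    n = len(sample_latencies)
    avg_latency = sum(sample_latencies) / n
    avg_cost = sum(sample_costs) / n
    return {
        "expected_latency_s": avg_latency,          # per run
        "expected_total_cost": avg_cost * total_runs,
    }
```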

Example use cases

  • Set a low-reasoning-depth preset for high-throughput, latency-sensitive services.
  • Limit token or compute budgets for a periodic batch job to control cloud spend.
  • Enable fine-grained metrics collection when testing a new reasoning strategy.
  • Switch to deeper reasoning for complex investigations and record before/after metrics.
  • Apply budget caps to prevent runaway cost during exploratory development.

FAQ

How do I choose an initial reasoning depth?

Begin with the minimum depth that produces acceptable results on a validation set, then increase incrementally while tracking accuracy and cost.
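That search can be expressed as a small loop: walk depths from shallow to deep and stop at the first one that clears your quality bar within budget. A sketch under stated assumptions (`evaluate` is a stand-in for running your validation set at a given depth and returning a quality/cost pair):

```python
# Pick the minimum reasoning depth that is acceptable on a validation set.
def choose_depth(evaluate, depths, min_quality, max_cost):
    """Return the first depth (shallow -> deep) meeting the quality bar
    within the cost cap; fall back to the deepest setting otherwise."""
    for depth in depths:  # assumed ordered shallow -> deep
        quality, cost = evaluate(depth)
        if quality >= min_quality and cost <= max_cost:
            return depth
    return depths[-1]
```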

What metrics should I track?

Track accuracy or task-specific quality, latency, compute or token usage, and cost per run to evaluate trade-offs.
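One way to keep those four metrics comparable is to record them per run in a single structure. An illustrative sketch (field names and the comparison ratio are assumptions, not prescribed by the skill):

```python
# Per-run record of the metrics suggested above, so trade-offs can be
# compared across reasoning settings.
from dataclasses import dataclass


@dataclass
class RunMetrics:
    quality: float     # accuracy or task-specific quality score
    latency_s: float   # wall-clock latency per run
    tokens: int        # compute or token usage
    cost_usd: float    # cost per run


def cost_per_quality_point(m: RunMetrics) -> float:
    """Simple ratio for comparing runs: lower means a better trade-off."""
    return m.cost_usd / max(m.quality, 1e-9)
```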