---
name: tool-selection
description: Use when selecting between MCP tools based on task complexity and requirements - provides a structured selection workflow and decision rationale.
---
# Tool Selection
## Overview
Select the optimal MCP tool by evaluating task complexity, accuracy needs, and performance trade-offs.
## When to Use
- Choosing between Codanna and Morphllm
- Routing tasks based on complexity
- Explaining tool selection rationale
Avoid when:
- The tool is explicitly specified by the user
## Quick Reference
| Task | Load reference |
| --- | --- |
| Tool selection | `skills/tool-selection/references/select.md` |
## Workflow
1. Parse the operation requirements.
2. Load the tool selection reference.
3. Apply the scoring and decision matrix.
4. Report the chosen tool and rationale.
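Steps 3 and 4 above can be sketched as a weighted decision matrix. This is a minimal illustration, not the skill's actual reference implementation: the criteria names, weights, and per-tool ratings below are hypothetical placeholders.

```python
# Hypothetical sketch of the scoring and decision matrix: rate each candidate
# tool against weighted criteria, then select the highest-scoring one.
WEIGHTS = {"accuracy": 0.4, "speed": 0.3, "complexity_fit": 0.3}

# Per-criterion ratings on a 0-1 scale (illustrative values, not benchmarks).
CANDIDATES = {
    "codanna":  {"accuracy": 0.9, "speed": 0.5, "complexity_fit": 0.8},
    "morphllm": {"accuracy": 0.6, "speed": 0.9, "complexity_fit": 0.5},
}

def score(ratings: dict[str, float]) -> float:
    """Weighted sum of per-criterion ratings."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def select_tool(candidates: dict[str, dict[str, float]]) -> tuple[str, float]:
    """Return (tool_name, score) for the highest-scoring candidate."""
    best = max(candidates, key=lambda name: score(candidates[name]))
    return best, score(candidates[best])

tool, s = select_tool(CANDIDATES)
print(f"Selected {tool} (score {s:.2f})")
```

With these illustrative numbers the matrix favors the higher-accuracy tool; shifting weight toward `speed` flips the recommendation, which is exactly the trade-off the reported rationale should explain.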
## Output
- Selected tool and confidence
- Rationale and trade-offs
## Common Mistakes
- Ignoring explicit user tool preferences
- Overweighting speed over accuracy without justification
## How It Works
The skill parses the operation requirements (latency, budget, accuracy, domain constraints) and loads the tool selection reference to establish baseline capabilities. It then applies a scoring matrix that weights each criterion according to its priority, compares the candidate tools (for example, Codanna vs Morphllm), and computes a recommendation with a confidence level. Finally, it generates a concise rationale describing the chosen trade-offs and any assumptions made.
## FAQ
**What if a user explicitly requests a specific tool?**
Honor the explicit request and skip automatic selection; record the user preference in the logs and the rationale.

**How is confidence calculated?**
Confidence is derived from the normalized score difference between the top candidates and the completeness of required capability matches.
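The confidence answer above can be sketched as a simple blend of the two signals it names. The 0.5/0.5 blend weights and the function signature are assumptions for illustration, not the skill's defined formula.

```python
# Hypothetical confidence heuristic: combine the normalized score gap between
# the top two candidates with the fraction of required capabilities matched.
def confidence(top_score: float, runner_up_score: float,
               matched_caps: int, required_caps: int) -> float:
    """Blend score margin with capability completeness (both in [0, 1])."""
    if top_score <= 0 or required_caps <= 0:
        return 0.0
    margin = (top_score - runner_up_score) / top_score  # 0 = tie, 1 = no contest
    completeness = matched_caps / required_caps
    return 0.5 * margin + 0.5 * completeness  # illustrative equal weighting

print(confidence(0.75, 0.66, 4, 5))  # narrow margin, 4 of 5 capabilities matched
```

A tie between the top candidates drives confidence toward the completeness term alone, which is why a close race with full capability coverage still reports only moderate confidence.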