
multi-llm-consult skill


This skill consults external LLMs to provide second opinions, plan comparisons, and delegated reviews, then consolidates results for informed decisions.

npx playbooks add skill nickcrew/claude-cortex --skill multi-llm-consult


Files (3): SKILL.md (1.9 KB)
---
name: multi-llm-consult
description: Consult external LLMs (Gemini, OpenAI/Codex, Qwen) for second opinions, alternative plans, independent reviews, or delegated tasks. Use when a user asks for another model's perspective, wants to compare answers, or requests delegating a subtask to Gemini/Codex/Qwen.
---

# Multi-LLM Consult

## Overview

Use a bundled script to query external LLM providers with a sanitized prompt and return a concise comparison.

## Setup

- Configure API keys in the TUI: open the Command Palette (Ctrl+P) and run **Configure LLM Providers**.
- Keys are stored in `settings.json` under `llm_providers`.
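The exact `settings.json` schema is not documented in this skill; the fragment below is a plausible shape, purely illustrative (the provider names and `api_key` field are assumptions, not a confirmed format):

```json
{
  "llm_providers": {
    "gemini": { "api_key": "YOUR_GEMINI_KEY" },
    "codex":  { "api_key": "YOUR_OPENAI_KEY" },
    "qwen":   { "api_key": "YOUR_QWEN_KEY" }
  }
}
```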

## Workflow

1. Identify the **purpose** (`second-opinion`, `plan`, `review`, `delegate`).
2. Summarize the task and **sanitize sensitive data** before sending it out.
3. Run the consult script with the chosen provider.
4. Compare responses and reconcile with your own plan before acting.
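Step 2 can be sketched as a simple redaction pass over a prompt file before it leaves the machine. The paths and secret patterns below (`sk-`, `ghp_` prefixes) are illustrative only, not an exhaustive sanitizer:

```shell
# Illustrative sanitization pass before consulting an external model.
# The secret patterns are examples; adapt them to your own stack.
printf 'Refactor auth. Current key: sk-abc123, CI token: ghp_XYZ99.\n' > /tmp/prompt.txt

sed -E \
  -e 's/sk-[A-Za-z0-9]+/[REDACTED]/g' \
  -e 's/ghp_[A-Za-z0-9]+/[REDACTED]/g' \
  /tmp/prompt.txt > /tmp/prompt_clean.txt

# Inspect the sanitized prompt before passing it to the consult script.
cat /tmp/prompt_clean.txt
```

A quick manual review of the sanitized file is still worthwhile; pattern-based redaction can miss project-specific secrets.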

## Consult Script

Always run `--help` first:

```bash
python scripts/consult_llm.py --help
```

Example: second opinion

```bash
python scripts/consult_llm.py \
  --provider gemini \
  --purpose second-opinion \
  --prompt "We plan to refactor module X. What risks or gaps do you see?"
```

Example: delegate a review

```bash
python scripts/consult_llm.py \
  --provider qwen \
  --purpose review \
  --prompt-file /tmp/review_request.md \
  --context-file /tmp/patch.diff
```

Example: plan check with Codex (OpenAI)

```bash
python scripts/consult_llm.py \
  --provider codex \
  --purpose plan \
  --prompt "Draft a 5-step plan for implementing feature Y."
```

## Output Handling

- Treat responses as advisory; verify against repo constraints and current state.
- Summarize the external response in 3–6 bullets before acting.
- If responses conflict, call out the differences explicitly and choose a path.
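When two providers disagree, a plain `diff` over their saved responses makes the conflict explicit before you reconcile. The file paths and response text below are illustrative:

```shell
# Illustrative conflict check between two saved provider responses.
printf 'Use a feature flag.\nAdd tests first.\n' > /tmp/gemini_response.txt
printf 'Use a feature flag.\nRefactor first.\n'  > /tmp/qwen_response.txt

# diff exits non-zero when files differ; '|| true' keeps scripts going
# under 'set -e' so you can act on the disagreement.
diff -u /tmp/gemini_response.txt /tmp/qwen_response.txt || true
```

Lines the models agree on drop out of the unified diff, leaving only the points you need to adjudicate.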

## References

- Provider defaults and configuration: `references/providers.md`

Overview

This skill consults external LLMs (Gemini, OpenAI/Codex, Qwen) to get second opinions, alternative plans, independent reviews, or to delegate discrete tasks. It runs a sanitized prompt against a chosen provider and returns concise, comparable outputs to help you decide or act.

How this skill works

You pick a purpose (second-opinion, plan, review, delegate), sanitize any sensitive data, and call the consult script with the provider and prompt or files. The script queries the external model, captures its response, and produces a short comparison and summary for easy reconciliation with your primary plan. Responses are advisory and require verification against your repository and constraints.

When to use it

  • You need an independent second opinion on a design, PR, or plan.
  • You want alternative implementation plans or step-by-step proposals.
  • You need an automated review of a patch, diff, or proposal.
  • You want to delegate a well-scoped subtask (drafting, tests, docs).
  • You need to compare outputs from multiple LLM providers before deciding.

Best practices

  • Sanitize prompts to remove secrets, tokens, or private data before sending.
  • Choose purpose explicitly (second-opinion, plan, review, delegate) to frame the model’s response.
  • Run the script with --help first to confirm available flags and provider names.
  • Summarize external responses in 3–6 bullets and call out conflicts or gaps.
  • Verify any suggested code or changes against the current repo and CI before merging.

Example use cases

  • Ask Gemini for a second opinion on risks and gaps before refactoring module X.
  • Request a 5-step implementation plan from Codex (OpenAI) for a new feature.
  • Send a patch.diff and prompt Qwen to perform an independent code review.
  • Delegate documentation drafting for a module to an external model, then edit the result.
  • Compare recommendations from two providers to resolve conflicting suggestions.

FAQ

How do I configure API keys?

Open the TUI command palette (Ctrl+P), run Configure LLM Providers, and add keys; they are stored in settings.json under llm_providers.

What should I do if providers give conflicting advice?

Summarize each response point-by-point, highlight conflicts, weigh them against repo constraints and tests, then choose or synthesize a path.