---
name: second-opinion
description: |
  WHEN: User faces complex architectural decisions, asks for "another perspective" or "second opinion",
  multiple valid approaches exist, reviewing critical/security-sensitive code, design trade-offs,
  or user says "sanity check", "what do you think", or asks about contentious patterns
  WHEN NOT: Simple questions, straightforward implementations, routine code changes,
  user has expressed strong preference, user explicitly declines other opinions
---
# Second Opinion Skill
Proactively suggest getting another LLM's perspective when the situation warrants it.
## Trigger Conditions
Suggest a second opinion when you detect:
### 1. Architectural Decisions
- Choosing between design patterns (e.g., repository vs service layer)
- Database schema design decisions
- API design choices (REST vs GraphQL, versioning strategy)
- Service decomposition (monolith vs microservices)
- State management approaches
### 2. Complex Trade-offs
- Performance vs. readability
- Flexibility vs. simplicity
- DRY vs. explicit code
- Build vs. buy decisions
- Consistency vs. availability trade-offs
### 3. Critical Code Reviews
- Security-sensitive code (authentication, authorization, crypto)
- Performance-critical paths
- Complex algorithms or data structures
- Code handling financial transactions or PII
- Concurrency and threading logic
### 4. Explicit Requests (trigger words)
- "another perspective"
- "second opinion"
- "sanity check"
- "what do you think"
- "am I on the right track"
- "does this make sense"
- "is this a good approach"
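
The explicit-request triggers above are plain phrases, so detection can be a simple case-insensitive substring match. A minimal sketch (the helper name and structure are illustrative, not part of the skill runtime; the phrase list is taken verbatim from this section):

```python
# Explicit trigger phrases from the "Explicit Requests" list above.
TRIGGER_PHRASES = (
    "another perspective",
    "second opinion",
    "sanity check",
    "what do you think",
    "am i on the right track",
    "does this make sense",
    "is this a good approach",
)

def has_trigger_phrase(message: str) -> bool:
    """Return True if the user message contains any explicit trigger phrase."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)
```

Substring matching keeps the check cheap; the other trigger categories (architecture, trade-offs, critical code) require judgment rather than keyword matching.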
## How to Suggest
When conditions are met, offer specific options:
> This involves [type of decision]. Would you like a second opinion from another LLM?
>
> - `/codex review` - Get OpenAI's analysis
> - `/gemini <specific question>` - Ask Google Gemini
> - `/ollama <question>` - Use a local model (keeps data private)
> - `/llm-compare <question>` - Compare multiple models
**Tailor the suggestion to the context:**
For security-sensitive code:
> Since this involves authentication logic, you might want a second security review. Try `/codex review` or `/ollama` (keeps code local) for another perspective.
For architectural decisions:
> This is a significant architectural choice. Different models sometimes weigh trade-offs differently. Want to try `/llm-compare "should I use X or Y for this use case"` to see multiple perspectives?
For complex algorithms:
> This algorithm has some complexity. A second set of eyes might catch edge cases. Try `/codex explain the edge cases in this algorithm`.
## When NOT to Suggest
Do not suggest second opinions when:
- User is actively implementing (don't interrupt flow)
- Task is simple/straightforward (typos, formatting, simple fixes)
- User has already made a firm decision
- User said "just do it" or similar
- It's routine code changes with clear requirements
- User previously declined suggestions in this session
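
The suppression rules above gate every suggestion: even when a trigger fires, the skill stays quiet if any opt-out condition holds. A minimal sketch of that decision (function and flag names are hypothetical, chosen to mirror the list above):

```python
def should_suggest(triggered: bool, *, simple_task: bool = False,
                   user_decided: bool = False,
                   declined_in_session: bool = False) -> bool:
    """Suggest a second opinion only when a trigger fired and no
    suppression condition from the 'When NOT to Suggest' list applies."""
    if not triggered:
        return False
    # Any single suppression condition vetoes the suggestion.
    return not (simple_task or user_decided or declined_in_session)
```

Tracking `declined_in_session` as session state is what prevents the skill from repeating a suggestion the user already turned down.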
## Privacy Consideration
Always mention `/ollama` as an option when the code might be sensitive:
> For proprietary code, `/ollama` keeps everything local - your code never leaves your machine.
## Example Interactions
**Example 1: Architectural Decision**
User: "Should I use a monorepo or multi-repo for these microservices?"
Suggest:
> Both approaches have valid trade-offs that depend on your team size and deployment needs. For a decision this impactful, would you like another perspective?
>
> - `/codex explain trade-offs between monorepo and multi-repo`
> - `/llm-compare` to see how different models weigh the factors
**Example 2: Security Code Review**
User: "Can you review this JWT validation code?"
After providing your review, suggest:
> Since this is security-critical authentication code, you might want a second review:
>
> - `/codex review` for OpenAI's security analysis
> - `/ollama review this JWT implementation` (keeps code private)
**Example 3: Complex Algorithm**
User: "Does this concurrent queue implementation look correct?"
After analysis, suggest:
> Concurrent code can have subtle race conditions. Want a second opinion?
>
> - `/codex analyze race conditions in this code`
> - `/gemini explain potential deadlocks here`
**Example 4: Design Pattern Choice**
User: "Should I use the factory pattern or builder pattern here?"
Suggest:
> Both patterns could work here with different trade-offs. Want to see how other models approach this?
>
> - `/llm-compare "factory vs builder pattern for <context>"` for multiple perspectives
## Integration with Other Commands
When suggesting, be specific about which command fits best:
| Situation | Best Command |
|-----------|--------------|
| Code review | `/codex review` |
| Quick question | `/gemini <question>` |
| Sensitive/private code | `/ollama <question>` |
| Want multiple views | `/llm-compare <question>` |
| Complex reasoning task | `/codex` or `/ollama` with larger models |
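
The table above is effectively a routing rule with one override: privacy trumps everything else. A minimal sketch, assuming the situation keys below (the helper and key names are illustrative; the slash commands are the ones this skill documents):

```python
# Routing table mirroring the "Integration with Other Commands" table.
ROUTING = {
    "code_review": "/codex review",
    "quick_question": "/gemini",
    "multiple_views": "/llm-compare",
}

def pick_command(situation: str, *, sensitive: bool = False) -> str:
    """Map a situation to the best-fit command, preferring the local
    model whenever the code is sensitive or proprietary."""
    if sensitive:
        return "/ollama"  # keeps code on the user's machine
    # Default to comparing models when the situation is ambiguous.
    return ROUTING.get(situation, "/llm-compare")
```

The `sensitive` override encodes the privacy consideration: `/ollama` wins whenever the code should not leave the user's machine.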