
This skill proactively suggests getting a second opinion on architectural decisions, complex trade-offs, or security reviews, so the user can weigh multiple perspectives.

npx playbooks add skill gopherguides/gopher-ai --skill second-opinion

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (4.8 KB)
---
name: second-opinion
description: |
  WHEN: User faces complex architectural decisions, asks for "another perspective" or "second opinion",
  multiple valid approaches exist, reviewing critical/security-sensitive code, design trade-offs,
  or user says "sanity check", "what do you think", or asks about contentious patterns
  WHEN NOT: Simple questions, straightforward implementations, routine code changes,
  user has expressed strong preference, user explicitly declines other opinions
---

# Second Opinion Skill

Proactively suggest getting another LLM's perspective when the situation warrants it.

## Trigger Conditions

Suggest a second opinion when you detect:

### 1. Architectural Decisions
- Choosing between design patterns (e.g., repository vs service layer)
- Database schema design decisions
- API design choices (REST vs GraphQL, versioning strategy)
- Service decomposition (monolith vs microservices)
- State management approaches

### 2. Complex Trade-offs
- Performance vs. readability
- Flexibility vs. simplicity
- DRY vs. explicit code
- Build vs. buy decisions
- Consistency vs. availability trade-offs

### 3. Critical Code Reviews
- Security-sensitive code (authentication, authorization, crypto)
- Performance-critical paths
- Complex algorithms or data structures
- Code handling financial transactions or PII
- Concurrency and threading logic

### 4. Explicit Requests (trigger words)
- "another perspective"
- "second opinion"
- "sanity check"
- "what do you think"
- "am I on the right track"
- "does this make sense"
- "is this a good approach"
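
As a rough illustration, explicit-request detection amounts to a substring match over a normalized query. This is a hypothetical sketch only; the actual skill is interpreted by the agent from this file, not implemented as code:

```python
# Hypothetical sketch of trigger-phrase detection; the real skill is
# interpreted by the agent from SKILL.md, not implemented in code.
TRIGGER_PHRASES = [
    "another perspective",
    "second opinion",
    "sanity check",
    "what do you think",
    "am i on the right track",
    "does this make sense",
    "is this a good approach",
]

def matches_explicit_request(query: str) -> bool:
    """Return True if the query contains an explicit second-opinion phrase."""
    normalized = query.lower()
    return any(phrase in normalized for phrase in TRIGGER_PHRASES)
```

In practice the agent also weighs context (is the user mid-implementation? has a firm decision been made?) before acting on a match.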

## How to Suggest

When conditions are met, offer specific options:

> This involves [type of decision]. Would you like a second opinion from another LLM?
>
> - `/codex review` - Get OpenAI's analysis
> - `/gemini <specific question>` - Ask Google Gemini
> - `/ollama <question>` - Use a local model (keeps data private)
> - `/llm-compare <question>` - Compare multiple models

**Tailor the suggestion to the context:**

For security-sensitive code:
> Since this involves authentication logic, you might want a second security review. Try `/codex review` or `/ollama` (keeps code local) for another perspective.

For architectural decisions:
> This is a significant architectural choice. Different models sometimes weigh trade-offs differently. Want to try `/llm-compare "should I use X or Y for this use case"` to see multiple perspectives?

For complex algorithms:
> This algorithm has some complexity. A second set of eyes might catch edge cases. Try `/codex explain the edge cases in this algorithm`.

## When NOT to Suggest

Do not suggest second opinions when:
- User is actively implementing (don't interrupt flow)
- Task is simple/straightforward (typos, formatting, simple fixes)
- User has already made a firm decision
- User said "just do it" or similar
- It's routine code changes with clear requirements
- User previously declined suggestions in this session

## Privacy Consideration

Always mention `/ollama` as an option when the code might be sensitive:

> For proprietary code, `/ollama` keeps everything local - your code never leaves your machine.

## Example Interactions

**Example 1: Architectural Decision**

User: "Should I use a monorepo or multi-repo for these microservices?"

Suggest:
> Both approaches have valid trade-offs that depend on your team size and deployment needs. For a decision this impactful, would you like another perspective?
>
> - `/codex explain trade-offs between monorepo and multi-repo`
> - `/llm-compare` to see how different models weigh the factors

**Example 2: Security Code Review**

User: "Can you review this JWT validation code?"

After providing your review, suggest:
> Since this is security-critical authentication code, you might want a second review:
>
> - `/codex review` for OpenAI's security analysis
> - `/ollama review this JWT implementation` (keeps code private)

**Example 3: Complex Algorithm**

User: "Does this concurrent queue implementation look correct?"

After analysis, suggest:
> Concurrent code can have subtle race conditions. Want a second opinion?
>
> - `/codex analyze race conditions in this code`
> - `/gemini explain potential deadlocks here`

**Example 4: Design Pattern Choice**

User: "Should I use the factory pattern or builder pattern here?"

Suggest:
> Both patterns could work here with different trade-offs. Want to see how other models approach this?
>
> - `/llm-compare "factory vs builder pattern for <context>"` for multiple perspectives

## Integration with Other Commands

When suggesting, be specific about which command fits best:

| Situation | Best Command |
|-----------|--------------|
| Code review | `/codex review` |
| Quick question | `/gemini <question>` |
| Sensitive/private code | `/ollama <question>` |
| Want multiple views | `/llm-compare <question>` |
| Complex reasoning task | `/codex` or `/ollama` with larger models |
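
The table above can be read as a simple routing rule: pick the command that matches the situation, but prefer the local model whenever privacy is a concern. A hypothetical sketch (the situation labels are illustrative assumptions, not a real API):

```python
# Hypothetical sketch of the command-routing table above; the situation
# labels are illustrative assumptions, not part of any real API.
SITUATION_TO_COMMAND = {
    "code_review": "/codex review",
    "quick_question": "/gemini <question>",
    "sensitive_code": "/ollama <question>",
    "multiple_views": "/llm-compare <question>",
    "complex_reasoning": "/codex",  # or /ollama with a larger local model
}

def suggest_command(situation: str, private: bool = False) -> str:
    """Pick a command for the situation; prefer /ollama when privacy matters."""
    if private:
        return SITUATION_TO_COMMAND["sensitive_code"]
    return SITUATION_TO_COMMAND.get(situation, SITUATION_TO_COMMAND["multiple_views"])
```

Defaulting unknown situations to `/llm-compare` mirrors the skill's bias toward surfacing multiple perspectives when the best fit is unclear.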

Overview

This skill proactively suggests getting another LLM’s perspective when a user faces non-trivial design, security, or trade-off decisions. It identifies situations where a second opinion can reduce risk and surface alternative approaches. Suggestions are tailored to the context and include options for cloud models, local/private models, and multi-model comparisons.

How this skill works

The skill scans the user's query for trigger signals such as architectural decisions, complex trade-offs, or security-critical code, and for explicit phrases such as “second opinion,” “sanity check,” or “what do you think.” When triggered, it offers targeted follow-up commands (e.g., codex, gemini, ollama, llm-compare) and recommends the best fit based on privacy and intent. It avoids interrupting users during simple tasks, active implementation, or when the user has firmly decided.

When to use it

  • Choosing between major architectural patterns or service decompositions
  • Evaluating complex trade-offs (performance vs readability, flexibility vs simplicity)
  • Reviewing security-sensitive or privacy-related code (auth, crypto, PII handling)
  • Inspecting performance-critical or concurrency-heavy implementations
  • When the user explicitly asks for another perspective, sanity check, or second opinion

Best practices

  • Be specific when asking for a second opinion — include goals, constraints, and context
  • Use /ollama for proprietary or sensitive code to keep data local
  • Use /llm-compare to surface differing model perspectives on ambiguous design choices
  • Prefer targeted commands: /codex for deep analysis, /gemini for quick alternate takes
  • Respect the user’s flow: don’t suggest when they’re actively implementing or have declined extra opinions

Example use cases

  • Debating monorepo vs multi-repo for a microservices landscape and wanting model comparisons
  • Reviewing JWT validation or authentication logic and requesting an extra security-focused review
  • Assessing a concurrency algorithm for subtle race conditions and asking for edge-case analysis
  • Deciding between repository vs service layer patterns and seeking varied trade-offs from multiple models
  • Choosing between REST and GraphQL for a public API and wanting different model viewpoints

FAQ

What command should I use for sensitive code?

Use /ollama to keep code local and avoid sending proprietary data to remote services.

When will the skill not suggest a second opinion?

It won’t suggest one for trivial fixes, when you’re actively implementing, or if you explicitly decline additional opinions.