
independent-research skill

/independent-research

This skill conducts parallel web research to verify facts, validate solutions, and present tested recommendations before acting.

npx playbooks add skill ntcoding/claude-skillz --skill independent-research


Files (1): SKILL.md (5.1 KB)
---
name: independent-research
description: "Use when about to ask the user a factual question, propose a solution, diagnose an error, or choose between approaches. Triggers on: 'Do you have X installed?', 'What version?', 'Is X configured?', 'We should...', 'The fix is...', 'Options: 1...', 'Based on my understanding...', 'I believe X supports...'. Before deciding anything, spin up parallel subagents to WebSearch for current docs, community solutions, framework best practices, and GitHub issues. Your memory is stale; verify everything."
version: 1.2.0
---

# Independent Research

Research-driven investigation. Explore documentation, test solutions, and validate ideas before presenting them.

## Critical Rules

🚨 **NEVER ASK FACTUAL QUESTIONS YOU CAN ANSWER YOURSELF.** If a command, config file, or web search can answer it, use that. Only ask users about preferences and priorities.

🚨 **VALIDATE BEFORE PRESENTING.** Test commands, verify syntax, check documentation. Never present untested recommendations.

## Lazy Question Detection

If you catch yourself about to ask any of these, STOP. Answer it yourself.

| Lazy Question | What To Do Instead |
|---|---|
| "Do you have X installed?" | Run `which X` or `X --version` |
| "What version of X?" | Run `X --version` |
| "Is X configured?" | Check the config file |
| "What OS are you using?" | Run `uname -a` |
| "Which tool do you use for X?" | Check `which`/`where`, look at project config |
| "Was this working before?" | Check git log, recent changes |
| "Do you want me to..." | Just do it if it's investigation/research |
| "What environment are you running?" | Check env vars, config files, runtime versions |
| "Have you tried X?" | Try it yourself first |
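
The "what to do instead" column collapses into one portable probe. A minimal POSIX-sh sketch; the tool names are illustrative, not assumptions about any particular machine (note that version managers like nvm are shell functions and won't be visible to `command -v` in a non-interactive shell):

```shell
#!/bin/sh
# Answer "is X installed / what version?" yourself.
# The tool list is illustrative; swap in whatever the task depends on.
probe() {
  if command -v "$1" >/dev/null 2>&1; then
    printf '%s: %s\n' "$1" "$("$1" --version 2>/dev/null | head -n1)"
  else
    printf '%s: not installed\n' "$1"
  fi
}

for tool in node fnm volta; do
  probe "$tool"
done

uname -a                               # "What OS are you using?"
if [ -f .nvmrc ]; then cat .nvmrc; fi  # pinned Node version, if any
```

Each probe prints either a version line or `not installed`, which is exactly the fact a lazy question would have asked for.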

## Anti-pattern

### ❌ Asking Instead of Investigating

```
⏺ NX project graph failed: Node.js v24 compatibility issue.

⏺ Do you have nvm/fnm/volta installed? Was this working
  on a different Node version?
```

The user doesn't need to answer this. You have bash.

```bash
which nvm; which fnm; which volta
node --version
cat .nvmrc 2>/dev/null
grep -A2 '"engines"' package.json
git log --oneline -20
```

## Premature Decision Detection

If you catch yourself about to recommend, diagnose, or decide without having researched, STOP.

| Premature Decision Signal | What To Do Instead |
|---|---|
| "We should downgrade/upgrade X" | Spin up parallel subagents: one to WebSearch the error + framework version, one to fetch the framework's migration/compatibility docs, one to search GitHub issues for the exact error |
| "The fix is to..." | Search for the exact error message first. Find the GitHub issue. Read the fix. Don't guess from memory |
| "This is a known issue with X" | Prove it. Fetch the GitHub issue URL, the changelog entry, the docs page. If you can't link to it, you're guessing |
| "Options: 1. X, 2. Y" | Before presenting options, WebSearch for what the framework/community actually recommends. Check migration guides, official docs, recent blog posts |
| "I believe X supports Y" / "Based on my understanding" | That's stale memory. WebFetch the actual docs page. Check the release notes. Verify the version compatibility matrix |
| Choosing between tools/versions/approaches | Launch parallel subagents: one per approach, each researching current docs, community consensus, and known issues |
| Error you haven't seen before | WebSearch the exact error message in quotes. Check GitHub issues. Check Stack Overflow. Check the framework's Discord/discussions |
| Recommending a config change | Fetch the framework's current configuration docs. Don't rely on what you remember the API being |

🚨 **Be paranoid that you're missing a better solution.** Your training data is stale. Libraries release weekly. Frameworks ship breaking changes. The answer you "know" may be outdated. Launch subagents to verify in parallel; it costs seconds and saves hours of wrong-direction work.

🚨 **Concrete actions, not vague research:**
- **WebSearch** the exact error message, the framework + version, the specific compatibility question
- **WebFetch** the official docs page, the changelog, the migration guide
- **Launch parallel subagents** when multiple angles need investigating simultaneously
- **Check GitHub issues** for the exact error; someone has probably hit this before
- **Read the actual release notes**; don't guess what version supports what
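
The "search the exact error message" step can be made mechanical by building phrase-match search URLs. A sketch; the error string is a placeholder, and the URL-encoding covers only the characters these two endpoints care about:

```shell
#!/bin/sh
# Turn an exact error message into phrase-match search URLs.
# ERR_MODULE_NOT_FOUND is a placeholder; use the real error verbatim.
err='ERR_MODULE_NOT_FOUND: Cannot find module'

# Wrap in quotes for exact-phrase matching, then percent-encode
# quotes, spaces, and colons.
encode_phrase() {
  printf '"%s"' "$1" | sed 's/"/%22/g; s/ /%20/g; s/:/%3A/g'
}

q=$(encode_phrase "$err")
echo "https://github.com/search?type=issues&q=$q"
echo "https://stackoverflow.com/search?q=$q"
```

Feeding the resulting URLs to WebFetch (or the quoted phrase to WebSearch) finds the issue thread rather than a guess from memory.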

## When to Ask vs Research

**Research yourself (facts):**
- What's installed, what version, what's configured
- Whether something is compatible
- What the error means
- What solutions exist

**Ask the user (preferences):**
- Which approach they prefer
- What their priorities are
- Design decisions with multiple valid options
- Business context you can't infer

## Mandatory Checklist

Before asking the user a question:

1. [ ] Verify this is a preference/priority question, NOT a factual question
2. [ ] Verify you cannot answer it with a command, config file read, or web search
3. [ ] Verify you have already tried to answer it yourself

Do not ask until all checks pass.

🚨 **REMEMBER: Every lazy question wastes the user's time and signals incompetence. If you can look it up, look it up.**

Overview

This skill enforces research-first behavior before asking factual questions or making technical recommendations. It spins up parallel checks and web searches to verify tools, versions, configs, and known issues so any suggestion is evidence-backed. Use it to avoid guessing, stale memory, or "lazy" questions that the system can answer itself.

How this skill works

Before asking the user anything factual, the skill runs local probes (`which`, `--version`, config file reads) and parallel web subagents (searching docs, changelogs, GitHub issues, and community answers), then synthesizes the findings. If the prompt indicates a premature decision or a lazy question, the skill replaces the question with concrete commands, verified results, or links to authoritative sources. It only asks the user when a true preference or business constraint cannot be determined programmatically.
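
The local-probe phase can be sketched in POSIX sh: independent probes run in parallel, results land on disk, and synthesis happens only after all of them finish. The probe set here is an example; the web-subagent phase depends on the agent runtime and is not shown:

```shell
#!/bin/sh
# Run independent local probes in parallel, then collect the results.
# Which probes matter depends on the task at hand.
probe_all() {
  dir=$1
  uname -s      > "$dir/os"   2>/dev/null &
  git --version > "$dir/git"  2>/dev/null &
  node --version > "$dir/node" 2>/dev/null &
  wait   # all probes finished; results are now on disk
}

tmp=$(mktemp -d)
probe_all "$tmp"
for f in "$tmp"/*; do
  printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
done
rm -rf "$tmp"
```

A probe that fails (tool not installed) simply leaves its result file empty, so one missing tool never blocks the rest of the synthesis.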

When to use it

  • About to ask a factual question that could be answered programmatically (installations, versions, configs).
  • Preparing to propose a fix, migration, or diagnostic that depends on current docs or known issues.
  • Choosing between tools, versions, or approaches where community consensus or compatibility matters.
  • Diagnosing an unfamiliar error before recommending a change.
  • Any time you suspect your memory may be stale or breaking changes could exist.

Best practices

  • Run local checks first: `which`, `tool --version`, config files, package manifests.
  • Spin parallel web subagents: one for official docs, one for changelogs/migration notes, one for GitHub/issue trackers, one for community Q&A.
  • Search the exact error message in quotes and capture the most relevant issue or doc link before deciding.
  • Never ask a factual question you can answer yourself; ask only about preferences, priorities, or inaccessible context.
  • When presenting options, cite the compatibility matrix or recent changelog entries discovered during research.

Example use cases

  • User reports a build failure; the skill runs local env checks, searches the web for the exact build error, then returns vetted fixes and links to matching GitHub issues.
  • Before recommending a package upgrade, skill fetches the package changelog and migration guide and reports breaking changes found.
  • When asked to pick between two frameworks, skill launches subagents to compare recent docs, benchmarks, and community discussions and summarizes evidence for each choice.
  • Debugging CI: skill checks CI config, runner environment, recent commits, and searches for matching CI errors in upstream repos.
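
The CI-debugging case starts the same way: gather facts before asking. A minimal sketch; the config paths below are common conventions for GitHub Actions, GitLab, and CircleCI, not guaranteed locations:

```shell
#!/bin/sh
# Gather the facts a CI-failure question depends on before asking.
# The path list covers common CI systems; extend as needed.
find_ci_configs() {
  for f in "$@"; do
    [ -f "$f" ] && echo "CI config: $f"
  done
  return 0
}

find_ci_configs .github/workflows/*.yml .gitlab-ci.yml .circleci/config.yml
git log --oneline -10 2>/dev/null || true  # recent commits that may have broken CI
```

With the config file and recent commits in hand, the matching upstream error can then be web-searched as an exact phrase.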

FAQ

When will this skill still ask me a question?

Only when the question concerns preferences, priorities, or context that no command, config read, or web search can reveal (e.g., business requirements or subjective trade-offs).

Does it always fetch external links?

Yes. For any factual claim that depends on current docs or known issues, it will fetch official docs, changelogs, and matching GitHub issues to support recommendations.