
This skill performs exhaustive research with full citations and structured findings to support critical decisions.

npx playbooks add skill mcouthon/agents --skill deep-research

Review the files below or copy the command above to add this skill to your agents.

---
name: deep-research
description: "Exhaustive investigation with citations and structured findings. Use when thorough coverage is needed, all sources must be cited, or research will inform critical decisions. Triggers on: 'use deep-research mode', 'deep research', 'exhaustive investigation', 'thorough research', 'cite all sources', 'comprehensive analysis', 'leave no stone unturned', 'research everything'. Read-only mode - investigates and documents but doesn't modify code."
context: fork
allowed-tools: [Read, Grep, Glob, WebFetch, WebSearch, LSP]
---

# Deep-Research Mode

Exhaustive investigation with full citations and structured findings.

## Core Philosophy

> "Thorough beats fast. Citations beat claims. Structured beats stream-of-consciousness."

This mode is for when surface-level understanding isn't enough. You're building a complete, citable reference that others can verify.

## When to Use

- Research will inform critical decisions
- Findings need to be verifiable by others
- Coverage must be exhaustive (no gaps allowed)
- Multiple stakeholders need to review the research
- Building documentation that will outlive the session

## Output Structure

Every deep-research output must include:

### 1. Executive Summary

2-3 sentences covering:

- What was investigated
- Key finding (one sentence)
- Confidence level (High/Medium/Low)

### 2. Scope Definition

| Included              | Excluded                         |
| --------------------- | -------------------------------- |
| [What was researched] | [What was intentionally skipped] |

### 3. Findings

Each finding must have:

```markdown
#### Finding: [Title]

**Confidence:** High | Medium | Low

**Evidence:**

- [file.py#L42](file.py#L42) - [what this shows]
- [config.yaml#L15](config.yaml#L15) - [what this shows]

**Analysis:**
[Interpretation of the evidence]

**Implications:**
[What this means for the task at hand]
```

### 4. Coverage Report

| Area          | Files Checked | Confidence       |
| ------------- | ------------- | ---------------- |
| [Component A] | 12            | High             |
| [Component B] | 5             | Medium           |
| [Component C] | 0             | Not investigated |
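A coverage report like the one above is easy to emit mechanically at the end of a research pass. A minimal sketch (the component names and counts are illustrative placeholders, not part of the skill):

```python
# Sketch: render a coverage report as a Markdown table.
# Row contents below are illustrative placeholders.
def coverage_table(rows):
    """rows: list of (area, files_checked, confidence) tuples."""
    lines = [
        "| Area | Files Checked | Confidence |",
        "| --- | --- | --- |",
    ]
    for area, checked, confidence in rows:
        lines.append(f"| {area} | {checked} | {confidence} |")
    return "\n".join(lines)

print(coverage_table([
    ("Component A", 12, "High"),
    ("Component B", 5, "Medium"),
    ("Component C", 0, "Not investigated"),
]))
```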

### 5. Open Questions

- [ ] [Question that couldn't be answered with available information]
- [ ] [Area that needs human clarification]

## Research Techniques

### Breadth-First Scan

Before going deep, establish the landscape:

1. **File search** - Find all files matching patterns
2. **Grep for patterns** - Key terms, class names, function names
3. **Directory structure** - Understand organization
4. **Entry points** - Main files, index files, configs

### Depth-First Trace

For each important area:

1. **Start at entry point** - Where execution begins
2. **Follow all branches** - Don't skip conditionals
3. **Document dependencies** - What does this call/import?
4. **Note side effects** - File writes, API calls, state changes
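For Python targets, step 3 (documenting dependencies) can lean on the standard-library `ast` module rather than text search. A sketch, not the skill's actual mechanism:

```python
# Sketch of one depth-first-trace step: list the modules a Python
# source file imports and the bare function names it calls.
import ast

def trace_dependencies(source):
    """Return (imported_modules, called_names) found in source text."""
    tree = ast.parse(source)
    imports, calls = [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imports += [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            imports.append(node.module or "")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            calls.append(node.func.id)
    return imports, calls
```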

### Cross-Reference

Connect findings across areas:

- Same pattern used differently in different places?
- Inconsistencies between documentation and code?
- Dead code paths?
- Hidden coupling between components?

## Citation Standards

### Always Cite

- Specific line numbers when referencing code
- File paths for configuration claims
- Test names when citing expected behavior
- Commit hashes for historical claims (if relevant)

### Citation Format

```markdown
[path/to/file.py#L42-L50](path/to/file.py#L42-L50) - Description
```
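If citations are assembled programmatically, a tiny helper keeps the format consistent (`cite` is a hypothetical name; the path and line numbers are illustrative):

```python
# Sketch: build a citation string in the format shown above.
def cite(path, start, end=None, note=""):
    anchor = f"{path}#L{start}" + (f"-L{end}" if end else "")
    return f"[{anchor}]({anchor}) - {note}"

print(cite("path/to/file.py", 42, 50, "Description"))
# -> [path/to/file.py#L42-L50](path/to/file.py#L42-L50) - Description
```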

### Confidence Levels

| Level  | Meaning                               | Citation Requirement          |
| ------ | ------------------------------------- | ----------------------------- |
| High   | Verified in code, tests pass          | Direct code citation          |
| Medium | Inferred from patterns                | Multiple supporting citations |
| Low    | Speculation based on naming/structure | Clearly marked as inference   |

## Quality Checklist

Before completing research:

- [ ] All claims have citations
- [ ] Coverage report shows no critical gaps
- [ ] Confidence levels are assigned to each finding
- [ ] Open questions are explicitly listed
- [ ] Executive summary captures the essence
- [ ] Another agent could verify findings from citations
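The first checklist item can be spot-checked mechanically. A sketch that flags findings with no `file#Lnn` citation link, assuming the report follows the finding template and citation format above:

```python
# Sketch: flag '#### Finding:' sections that contain no citation link.
# The regex encodes the [path#Lnn](path#Lnn) format described earlier.
import re

CITATION = re.compile(r"\[\S+#L\d+(?:-L\d+)?\]\(\S+\)")

def uncited_findings(report_md):
    """Return titles of finding sections that lack any citation."""
    missing = []
    for section in re.split(r"(?m)^#### Finding: ", report_md)[1:]:
        title = section.splitlines()[0]
        if not CITATION.search(section):
            missing.append(title)
    return missing
```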

## Anti-Patterns

| ❌ Don't                     | ✅ Do                                                                   |
| ---------------------------- | ----------------------------------------------------------------------- |
| "The codebase uses React"    | "[package.json#L15](package.json#L15) lists react with its version in dependencies" |
| "This probably handles auth" | "Auth handling uncertain - no direct evidence found (Low confidence)"   |
| "I looked at the files"      | "Examined 23 files in src/services/, found 4 relevant"                  |
| "Everything seems fine"      | "No issues found in [scope]. Coverage: [X] files, [Y] functions"        |

## Integration with Explore Agent

When spawned as a subagent from Explore:

1. Receive the investigation topic from parent
2. Perform exhaustive research using techniques above
3. Return structured findings in the output format
4. Parent agent incorporates summary, not full investigation trace

Overview

This skill performs exhaustive, citation-driven investigations and returns structured, verifiable research reports. It is read-only: it inspects files, configs, and tests and documents findings with precise citations and confidence levels. Use it when decisions depend on complete, auditable evidence rather than quick summaries.

How this skill works

The skill scans the repository breadth-first to map files, key terms, and entry points, then traces important code paths depth-first to collect direct evidence. Every claim is tied to file paths and line numbers; findings include confidence ratings, analysis, and implications. Outputs follow a fixed structure: executive summary, scope, findings with citations, coverage report, and open questions.

When to use it

  • Critical decisions require provable evidence
  • Delivering documentation or reports that must be auditable
  • You need exhaustive coverage with no gaps
  • Multiple stakeholders will review or rely on the research
  • Tests, configs, or code must be cross-validated before changes

Best practices

  • Define scope clearly before scanning to avoid wasted effort
  • Prefer direct code citations over inference; mark inferences as Low confidence
  • Use breadth-first scan to locate relevant areas, then depth-first trace for each area
  • Document side effects, external calls, and test cases with exact file#L ranges
  • List open questions and coverage gaps so reviewers can prioritize follow-up

Example use cases

  • Validate security-sensitive behavior before deployment with line-cited evidence
  • Produce a compliance-ready report showing where features are implemented and tested
  • Investigate a regression by tracing entry points and dependencies with citations
  • Audit third-party integration code and configuration for unexpected side effects
  • Create long-lived documentation that others can verify from cited files and tests

FAQ

Will the skill modify code or create commits?

No. The skill operates in read-only mode and only inspects and documents. It does not change files or create commits.

What citation format does the skill produce?

Citations use file paths and line ranges (e.g., path/to/file.py#L42-L50) for direct evidence. Each finding includes those citations and a stated confidence level.