
This skill conducts structured web research by planning, delegating to subagents, and synthesizing findings into a well-sourced report.

npx playbooks add skill spring-ai-alibaba/examples --skill web-search

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (4.2 KB)
---
name: web-research
description: Use this skill for web research requests; it provides a structured approach to planning, delegating, and synthesizing comprehensive web research.
---

# Web Research Skill

This skill provides a structured approach to conducting comprehensive web research using the `task` tool to spawn research subagents. It emphasizes planning, efficient delegation, and systematic synthesis of findings.

## When to Use This Skill

Use this skill when you need to:
- Research complex topics requiring multiple information sources
- Gather and synthesize current information from the web
- Conduct comparative analysis across multiple subjects
- Produce well-sourced research reports with clear citations

## Research Process

### Step 1: Create and Save Research Plan

Before delegating to subagents, you MUST:

1. **Create a research folder** - Organize all research files in a dedicated folder relative to the current working directory:
   ```
   mkdir research_[topic_name]
   ```
   This keeps files organized and prevents clutter in the working directory.

2. **Analyze the research question** - Break it down into distinct, non-overlapping subtopics

3. **Write a research plan file** - Use the `write_file` tool to create `research_[topic_name]/research_plan.md` containing:
    - The main research question
    - 2-5 specific subtopics to investigate
    - Expected information from each subtopic
    - How results will be synthesized

**Planning Guidelines:**
- **Simple fact-finding**: 1-2 subtopics
- **Comparative analysis**: 1 subtopic per comparison element (max 3)
- **Complex investigations**: 3-5 subtopics
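The plan-writing step above can be sketched with plain file I/O; the topic name, subtopics, and plan contents below are hypothetical placeholders (a real agent would use the `write_file` tool rather than direct filesystem access):

```python
from pathlib import Path

# Hypothetical topic name; a real run derives this from the user's question.
topic = "vector_databases"
folder = Path(f"research_{topic}")
folder.mkdir(exist_ok=True)

# Minimal research plan covering the four required elements.
plan = """# Research Plan: Vector Databases

## Main question
Which vector database fits a mid-sized RAG deployment best?

## Subtopics
1. Indexing algorithms and query latency
2. Pricing and hosting models
3. Ecosystem integrations

## Expected information
Benchmarks and latency figures (1), pricing pages (2), integration docs (3).

## Synthesis
Compare the subtopics in one report, citing source URLs from each findings file.
"""
(folder / "research_plan.md").write_text(plan)
```

Note the plan names three subtopics, matching the "comparative analysis" guideline above.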

### Step 2: Delegate to Research Subagents

For each subtopic in your plan:

1. **Use the `task` tool** to spawn a research subagent with:
    - Clear, specific research question (spell out any acronyms)
    - Instructions to write findings to a file: `research_[topic_name]/findings_[subtopic].md`
    - Budget: 3-5 web searches maximum

2. **Run up to 3 subagents in parallel** for efficient research

**Subagent Instructions Template:**
```
Research [SPECIFIC TOPIC]. Use the web_search tool to gather information.
After completing your research, use write_file to save your findings to research_[topic_name]/findings_[subtopic].md.
Include key facts, relevant quotes, and source URLs.
Use 3-5 web searches maximum.
```
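One way to instantiate the template above for each subtopic is plain string formatting; the topic name, slugs, and research questions here are hypothetical examples:

```python
# Prompt template mirroring the subagent instructions above.
TEMPLATE = (
    "Research {question}. Use the web_search tool to gather information.\n"
    "After completing your research, use write_file to save your findings to "
    "research_{topic}/findings_{slug}.md.\n"
    "Include key facts, relevant quotes, and source URLs.\n"
    "Use 3-5 web searches maximum."
)

topic = "vector_databases"  # hypothetical topic name
subtopics = {
    "indexing": "indexing algorithms and query latency in vector databases",
    "pricing": "pricing and hosting models of managed vector databases",
    "integrations": "ecosystem integrations offered by vector databases",
}

# One prompt per subtopic; pass each to the task tool (up to 3 in parallel).
prompts = [
    TEMPLATE.format(question=q, topic=topic, slug=slug)
    for slug, q in subtopics.items()
]
```

Keeping the file path inside the prompt is what makes the later synthesis step purely file-based.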

### Step 3: Synthesize Findings

After all subagents complete:

1. **Review the findings files** that were saved locally:
    - First run `list_files research_[topic_name]` to see what files were created
    - Then use `read_file` with the **file paths** (e.g., `research_[topic_name]/findings_*.md`)
    - **Important**: Use `read_file` for LOCAL files only, not URLs

2. **Synthesize the information** - Create a comprehensive response that:
    - Directly answers the original question
    - Integrates insights from all subtopics
    - Cites specific sources with URLs (from the findings files)
    - Identifies any gaps or limitations

3. **Write final report** (optional) - Use `write_file` to create `research_[topic_name]/research_report.md` if requested

**Note**: If you need to fetch additional information from URLs, use the `fetch_url` tool, not `read_file`.
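The synthesis step can be sketched as globbing every findings file and stitching them into one report; the folder name and the seeded findings file below are hypothetical stand-ins for what subagents would have written:

```python
from pathlib import Path

folder = Path("research_vector_databases")  # hypothetical research folder
folder.mkdir(exist_ok=True)

# Stand-in for a file a subagent would have written via write_file.
(folder / "findings_pricing.md").write_text(
    "- Provider A charges per stored vector (https://example.com/pricing)\n"
)

# Equivalent of list_files + read_file over the local findings files.
findings = sorted(folder.glob("findings_*.md"))
sections = [f"## {path.stem}\n\n{path.read_text()}" for path in findings]

report = "# Research Report\n\n" + "\n".join(sections)
(folder / "research_report.md").write_text(report)
```

Sorting the glob results gives the report a stable section order regardless of which subagent finished first.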

## Available Tools

You have access to:
- **write_file**: Save research plans and findings to local files
- **read_file**: Read local files (e.g., findings saved by subagents)
- **list_files**: See what local files exist in a directory
- **fetch_url**: Fetch content from URLs and convert to markdown (use this for web pages, not read_file)
- **task**: Spawn research subagents with web_search access

## Research Subagent Configuration

Each subagent you spawn will have access to:
- **web_search**: Search the web using Tavily (parameters: query, max_results, topic, include_raw_content)
- **write_file**: Save their findings to the filesystem
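As a sketch, the `web_search` parameters listed above could be bundled into a single payload before a subagent issues a search; the parameter names follow the list above, while the default values and the helper function itself are assumptions:

```python
def build_search_params(query: str, max_results: int = 5,
                        topic: str = "general",
                        include_raw_content: bool = False) -> dict:
    """Bundle the web_search parameters named above into one payload."""
    return {
        "query": query,
        "max_results": max_results,
        "topic": topic,
        "include_raw_content": include_raw_content,
    }

# A subagent staying within its 3-5 search budget might cap results per call.
params = build_search_params("managed vector database pricing", max_results=3)
```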

## Best Practices

- **Plan before delegating** - Always write research_plan.md first
- **Clear subtopics** - Ensure each subagent has distinct, non-overlapping scope
- **File-based communication** - Have subagents save findings to files, not return them directly
- **Systematic synthesis** - Read all findings files before creating final response
- **Stop appropriately** - Don't over-research; 3-5 searches per subtopic is usually sufficient

Overview

This skill provides a repeatable, file-driven workflow for conducting comprehensive web research by spawning focused subagents. It emphasizes upfront planning, controlled delegation, and a systematic synthesis step to produce well-sourced answers and optional final reports. The process keeps all artifacts organized in a local research folder and limits web searches to avoid over-researching.

How this skill works

First, create a dedicated research folder and write a short research plan defining the main question and 2–5 distinct subtopics. Next, spawn research subagents (tasks) for each subtopic; each subagent performs a small number of web searches and writes findings to local files. Finally, read the saved findings, synthesize a unified answer with citations and identified gaps, and optionally write a final report back to the research folder.

When to use it

  • Research complex topics that require multiple distinct information threads
  • Gather current, web-based evidence and synthesize it into a single report
  • Perform comparative analysis across technologies, vendors, or methods
  • Produce well-cited research deliverables for stakeholders
  • Break down a large investigation into parallel, bounded subtasks

Best practices

  • Always create research_[topic_name] and a research_plan.md before spawning subagents
  • Define 2–5 non-overlapping subtopics; keep each subagent’s scope narrow and specific
  • Limit each subagent to 3–5 web searches to maintain focus and efficiency
  • Require subagents to write findings to files; use local read_file to collect results
  • Synthesize by reading all findings files, cite URLs from those files, and note limitations

Example use cases

  • Compare pricing, features, and user reviews for three cloud AI providers
  • Compile recent academic and industry findings on a niche machine learning technique
  • Create a vendor-neutral report on security best practices for Java web apps
  • Survey regulatory developments across multiple countries for compliance planning
  • Produce a concise, sourced briefing for product management or executive teams

FAQ

Do I have to use the local file workflow?

Yes. The workflow relies on subagents saving findings to local files so you can read and synthesize them reliably.

How many subagents should I run in parallel?

You can run up to three subagents in parallel for efficiency; scale parallelism to how many findings streams you can reliably track and synthesize.