
web-searcher skill

/skills/kassimisai/web-searcher

This skill helps you conduct autonomous web research across multiple sources, synthesize the findings, and deliver structured, cited reports.

npx playbooks add skill openclaw/skills --skill web-searcher


SKILL.md
---
name: web-searcher
description: Autonomous web research agent that performs multi-step searches, follows links, extracts data, and synthesizes findings into structured reports. Use when asked to research a topic, find information across multiple sources, compare options, gather market data, compile lists, or answer questions requiring deep web investigation beyond a single search.
---

# Web Searcher Agent

## Workflow

1. **Parse the query** — Break the user's request into 2-5 specific search queries that cover different angles of the topic.

2. **Search phase** — Execute searches using `web_search`. Rate limit: max 3 searches, then assess before continuing.

3. **Deep dive phase** — For promising results, use `web_fetch` to extract full content. Prioritize:
   - Primary sources over aggregators
   - Recent content over old (check dates)
   - Authoritative domains over random blogs

4. **Cross-reference** — Compare findings across sources. Flag contradictions. Note consensus.

5. **Synthesize** — Compile findings into a clear, structured response with:
   - Key findings (bullet points)
   - Sources cited (URLs)
   - Confidence level (high/medium/low per claim)
   - Gaps identified (what couldn't be found)
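The five-step workflow above can be sketched as a single budgeted loop. `search` and `fetch` stand in for the agent's `web_search` and `web_fetch` tools, stubbed here as plain callables; the structure (not the tool API) is the point.

```python
from typing import Callable

MAX_SEARCHES = 10   # hard cap from the guidelines
MAX_FETCHES = 5     # be selective about deep dives

def run_research(sub_queries: list[str],
                 search: Callable[[str], list[str]],
                 fetch: Callable[[str], str]) -> dict:
    """Execute the search -> deep dive -> synthesize loop within budget."""
    searches_used = 0
    urls: list[str] = []
    for q in sub_queries:
        if searches_used >= MAX_SEARCHES:
            break
        urls.extend(search(q))
        searches_used += 1
        # Reassess every 3 searches before continuing (step 2).
        if searches_used % 3 == 0 and len(urls) >= MAX_FETCHES:
            break
    # Deep dive: fetch only the most promising pages (step 3).
    findings = {u: fetch(u) for u in urls[:MAX_FETCHES]}
    return {"searches_used": searches_used,
            "sources": list(findings),
            "findings": findings}
```

Cross-referencing and synthesis (steps 4-5) then operate on the returned `findings` map.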

## Search Strategies

### Factual queries
Search → verify across 2+ sources → report with citations.

### Comparison/market research
Search each option separately → fetch detail pages → build comparison table → recommend.

### People/company research
Search name + context → fetch LinkedIn/company pages → cross-reference news → compile profile.

### How-to/technical
Search with specific technical terms → fetch documentation/guides → distill steps.
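For the comparison/market-research strategy, per-option findings can be folded into a markdown table before the recommendation step. A minimal sketch; the field names are whatever was extracted from the detail pages, and `?` marks data a source did not provide.

```python
def comparison_table(options: dict[str, dict[str, str]]) -> str:
    """Render per-option findings as a markdown comparison table."""
    # Union of all fields seen across options, in a stable order.
    fields = sorted({f for attrs in options.values() for f in attrs})
    header = "| Option | " + " | ".join(fields) + " |"
    divider = "|---" * (len(fields) + 1) + "|"
    rows = [
        "| " + name + " | "
        + " | ".join(attrs.get(f, "?") for f in fields) + " |"
        for name, attrs in options.items()
    ]
    return "\n".join([header, divider] + rows)
```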

## Guidelines

- **Max 10 searches per task** to avoid rate limits and token waste.
- **Max 5 page fetches** — be selective about which URLs to deep-dive.
- Always include source URLs so the user can verify.
- If a search returns nothing useful, rephrase and retry once before moving on.
- For time-sensitive info, use the `freshness` parameter (`pd`/`pw`/`pm`/`py` for past day/week/month/year).
- Prefer `web_fetch` with `maxChars: 5000` to keep context manageable.
- If the task is massive, suggest breaking it into sub-tasks or spawning sub-agents.
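The rephrase-and-retry guideline can be expressed as a small wrapper around the search tool. `rephrase` is a hypothetical helper here; in practice the agent rewrites the query itself.

```python
from typing import Callable

def search_with_retry(query: str,
                      search: Callable[[str], list[str]],
                      rephrase: Callable[[str], str]) -> list[str]:
    """Run a search; if it returns nothing useful, rephrase and retry once."""
    results = search(query)
    if results:
        return results
    # One rephrased attempt, then move on regardless of outcome.
    return search(rephrase(query))
```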

## Output Format

```
## [Topic]

### Key Findings
- Finding 1 (Source: url)
- Finding 2 (Source: url)

### Details
[Expanded analysis]

### Sources
1. [Title](url) — what was found here
2. [Title](url) — what was found here

### Confidence & Gaps
- High confidence: [claims well-supported]
- Low confidence: [claims with limited sources]
- Not found: [what couldn't be determined]
```
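The template above can be filled programmatically once findings are collected. A sketch, assuming findings arrive as `(claim, source-url)` pairs and confidence is grouped by level:

```python
def format_report(topic: str,
                  findings: list[tuple[str, str]],
                  details: str,
                  confidence: dict[str, list[str]]) -> str:
    """Assemble the report: key findings, details, sources, confidence."""
    lines = [f"## {topic}", "", "### Key Findings"]
    lines += [f"- {claim} (Source: {url})" for claim, url in findings]
    lines += ["", "### Details", details, "", "### Sources"]
    seen: list[str] = []                    # deduplicate, preserve order
    for _, url in findings:
        if url not in seen:
            seen.append(url)
    lines += [f"{i}. {url}" for i, url in enumerate(seen, 1)]
    lines += ["", "### Confidence & Gaps"]
    for level, claims in confidence.items():
        lines.append(f"- {level}: " + "; ".join(claims))
    return "\n".join(lines)
```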

Overview

This skill is an autonomous web research agent that performs multi-step searches, follows links, extracts content, and synthesizes findings into structured reports. It is built to handle complex research requests that require cross-referencing multiple sources and producing verifiable summaries. Use it when you need consolidated, cited information from across the web rather than a single search result.

How this skill works

The agent parses your query into focused sub-queries, runs up to a limited number of targeted web searches, and selects promising pages for deep fetches. It prioritizes primary and recent sources, extracts content, cross-references claims, flags contradictions, and synthesizes results into a report with citations, confidence levels, and identified gaps. It enforces limits (max ~10 searches, max ~5 deep fetches) to stay efficient and avoid rate limits.

When to use it

  • Research topics requiring multi-source verification
  • Compare products, services, or market options across sources
  • Compile lists, datasets, or directories from the live web
  • Produce profiles for people or companies with cross-referenced facts
  • Gather time-sensitive or evolving information that needs date checks

Best practices

  • Provide a clear, specific query and any preferred sources or date ranges
  • Indicate desired depth (brief summary vs full investigative report) to control search/fetch limits
  • Ask for very large tasks to be broken into sub-tasks to keep reports focused and fast
  • Request the citation format you prefer (inline bullets, numbered list, or markdown) for easier verification
  • Specify freshness requirement (day/week/month/year) for time-sensitive topics

Example use cases

  • Market comparison: evaluate 3 competing SaaS tools, gather pricing, features, and recent reviews
  • Competitive intelligence: compile public information on a company’s product launches, funding, and news mentions
  • Technical research: find official docs and community guides to synthesize a step-by-step how-to
  • People research: build a short profile combining LinkedIn, company pages, and news coverage
  • List compilation: crawl and consolidate top resources, datasets, or regulatory references for a topic

FAQ

How many sources will you check?

I start with up to 3 targeted searches, reassess, and will not exceed about 10 searches total. I deep-fetch up to 5 pages to keep work focused.

Will you include links so I can verify claims?

Yes. Every report includes source URLs for key findings and a numbered sources section so you can inspect originals.

Can you handle very large research tasks?

For massive topics I recommend splitting into sub-tasks or spawning sub-agents; I’ll suggest a breakdown if the request is too big.