This skill helps you find and extract web content by performing targeted searches and optionally scraping full pages for rich results.
```bash
npx playbooks add skill firecrawl/cli --skill firecrawl-search
```
---
name: firecrawl-search
description: |
  Web search with full page content extraction. Use this skill whenever the user asks to search the web, find articles, research a topic, look something up, find recent news, discover sources, or says "search for", "find me", "look up", "what are people saying about", or "find articles about". Returns real search results with optional full-page markdown — not just snippets. Provides capabilities beyond Claude's built-in WebSearch.
allowed-tools:
- Bash(firecrawl *)
- Bash(npx firecrawl *)
---
# firecrawl search
Web search with optional content scraping. Returns search results as JSON, optionally with full page content.
## When to use
- You don't have a specific URL yet
- You need to find pages, answer questions, or discover sources
- First step in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → crawl → browser
## Quick start
```bash
# Basic search
firecrawl search "your query" -o .firecrawl/result.json --json
# Search and scrape full page content from results
firecrawl search "your query" --scrape -o .firecrawl/scraped.json --json
# News from the past day
firecrawl search "your query" --sources news --tbs qdr:d -o .firecrawl/news.json --json
```
## Options
| Option | Description |
| ------------------------------------ | --------------------------------------------- |
| `--limit <n>` | Max number of results |
| `--sources <web,images,news>` | Source types to search |
| `--categories <github,research,pdf>` | Filter by category |
| `--tbs <qdr:h\|d\|w\|m\|y>` | Time-based search filter |
| `--location` | Location for search results |
| `--country <code>` | Country code for search |
| `--scrape` | Also scrape full page content for each result |
| `--scrape-formats` | Formats when scraping (default: markdown) |
| `-o, --output <path>` | Output file path |
| `--json` | Output as JSON |
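The filters above compose in a single invocation. Below is a sketch of a filtered, scraped search; the query and output filename are hypothetical, while the flags are exactly those documented in the table. The `command -v` guard keeps the sketch a no-op on machines where the CLI is not installed.

```shell
# Hypothetical query; combines --limit, --sources, --tbs, and --scrape.
# Guarded so it is a no-op when the firecrawl CLI is not on PATH.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl search "rust async runtimes" \
    --limit 5 \
    --sources web,news \
    --tbs qdr:m \
    --scrape --scrape-formats markdown \
    -o .firecrawl/search-rust-async-scraped.json --json
fi
```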
## Tips
- **`--scrape` fetches full content** — don't re-scrape URLs from search results. This saves credits and avoids redundant fetches.
- Always write results to `.firecrawl/` with `-o` to avoid context window bloat.
- Use `jq` to extract URLs or titles: `jq -r '.data.web[].url' .firecrawl/search.json`
- Naming convention: `.firecrawl/search-{query}.json` or `.firecrawl/search-{query}-scraped.json`
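As a concrete illustration of the `jq` extraction above, the snippet below fabricates a minimal result file in the assumed `.data.web[]` shape (the real firecrawl output may carry additional fields) and pulls out the URLs:

```shell
# Build a stand-in result file; the shape mirrors the jq path used above.
mkdir -p .firecrawl
cat > .firecrawl/search-example.json <<'EOF'
{"data":{"web":[
  {"url":"https://example.com/a","title":"Result A"},
  {"url":"https://example.com/b","title":"Result B"}
]}}
EOF
# Extract just the URLs, one per line.
jq -r '.data.web[].url' .firecrawl/search-example.json
# → https://example.com/a
# → https://example.com/b
```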
## See also
- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape a specific URL
- [firecrawl-map](../firecrawl-map/SKILL.md) — discover URLs within a site
- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extract from a site
## FAQ
**Can I get full page content from search results?**
Yes: enable the `--scrape` flag to fetch and convert each result page into full-page Markdown or the formats given with `--scrape-formats`.

**How do I avoid bloating the agent context with long pages?**
Write results to a file with `-o` (e.g. `.firecrawl/search.json` or `.firecrawl/scraped.json`) and reference that file in downstream steps instead of pasting page content into the chat.