Firecrawl MCP Server
Provides web scraping, crawling, search, and extraction via MCP for automated data discovery and processing.
Configuration
```json
{
  "mcpServers": {
    "firecrawl_http": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

You can run the Firecrawl MCP Server to integrate Firecrawl's web scraping capabilities into your own workflows. The server lets you discover, crawl, search, scrape, and extract data from the web with built-in retry and rate-limit handling, and it supports both cloud and self-hosted setups.
Use an MCP client to connect to the Firecrawl MCP Server and perform web data tasks. You can start a local HTTP/MCP endpoint or run a local stdio server to manage tasks like scraping a single page, batch scraping multiple URLs, mapping a site to discover URLs, performing web searches, crawling entire sections, or extracting structured data from pages. The server handles automatic retries, rate limiting, and progress tracking, so you can focus on getting the data you need and integrating results into your workflow.
Prerequisites: you need Node.js and npm (or an environment that can run npm/npx). Install or run the server using one of the provided methods, then configure your MCP client to connect.
You have multiple ways to run the server, depending on your workflow and environment. The examples below are the explicit ways you can start and connect your MCP client to Firecrawl MCP.
HTTP (Streamable HTTP) connection:

```json
{
  "type": "http",
  "name": "firecrawl_http_mcp",
  "url": "http://localhost:3000/mcp",
  "args": []
}
```

Stdio (local process) connection:

```json
{
  "type": "stdio",
  "name": "firecrawl_mcp",
  "command": "npx",
  "args": ["-y", "firecrawl-mcp"],
  "env": {
    "FIRECRAWL_API_KEY": "YOUR_API_KEY"
  }
}
```

To run the server locally with Streamable HTTP, enable streamable HTTP mode, provide your API key, and start the MCP server; your client then connects at the local endpoint shown in the HTTP configuration above (http://localhost:3000/mcp).
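Once connected, you can verify the session by listing the server's tools with a standard MCP tools/list request; this is plain JSON-RPC, so it works the same over Streamable HTTP or stdio:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```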
Required for cloud API usage:
- FIRECRAWL_API_KEY: your Firecrawl API key (required when using the cloud API).

Optional:
- FIRECRAWL_API_URL: a custom endpoint for self-hosted instances.
- Retry and credit-monitoring options, such as FIRECRAWL_RETRY_MAX_ATTEMPTS and FIRECRAWL_CREDIT_WARNING_THRESHOLD, to tune behavior.
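For example, a stdio configuration that tunes retries and the credit warning threshold might look like the following sketch (the two numeric values are illustrative, not documented defaults):

```json
{
  "type": "stdio",
  "name": "firecrawl_mcp",
  "command": "npx",
  "args": ["-y", "firecrawl-mcp"],
  "env": {
    "FIRECRAWL_API_KEY": "YOUR_API_KEY",
    "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
    "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000"
  }
}
```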
- Scrape a single page with the scrape tool to get content as markdown or HTML.
- Batch scrape a list of known URLs to fetch multiple pages efficiently.
- Map a site to discover all indexed URLs before deciding what to scrape.
- Search the web for specific topics and optionally extract content from the results.
- Crawl a site to extract content across multiple pages, with depth and page limits.
- Extract structured data from pages using a prompt and a schema.
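Each of these capabilities is invoked through a standard MCP tools/call request. A sketch, assuming the single-page scraper is exposed under the tool name firecrawl_scrape (tool names may differ by server version); the tool-by-tool examples further down show only the arguments object:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "firecrawl_scrape",
    "arguments": { "url": "https://example.com", "formats": ["markdown"] }
  }
}
```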
The server includes automatic retries with exponential backoff and rate‑limit handling to minimize failed requests due to transient errors. Monitor credit usage for cloud API and adjust thresholds to avoid interruptions. Use self‑hosted setups behind proper auth and access controls when integrating into production workflows.
Start a local MCP server with an API key:
- Run the stdio configuration above with your API key via npx to start the Firecrawl MCP server locally.

Keep dependencies up to date and run tests when you modify server logic. If you contribute, ensure that new features are covered by tests and document any new environment variables or configuration options.
Scrape content from a single URL with options for formats, main content, wait time, timeouts, allowed/disallowed tags, and TLS behavior.
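As a sketch, an arguments object for a single-page scrape might look like this (parameter names such as onlyMainContent, waitFor, includeTags, excludeTags, and skipTlsVerification follow Firecrawl's scrape options and may differ across versions):

```json
{
  "url": "https://example.com/article",
  "formats": ["markdown"],
  "onlyMainContent": true,
  "waitFor": 1000,
  "timeout": 30000,
  "includeTags": ["article"],
  "excludeTags": ["nav", "footer"],
  "skipTlsVerification": false
}
```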
Scrape multiple known URLs efficiently with built‑in rate limiting and parallel processing, returning an operation ID for status checks.
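A plausible arguments object for a batch scrape, assuming a urls array plus shared scrape options; the response includes an operation ID you can poll later:

```json
{
  "urls": ["https://example.com/a", "https://example.com/b"],
  "options": { "formats": ["markdown"], "onlyMainContent": true }
}
```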
Discover all indexed URLs on a site to plan scraping or crawling efforts.
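A minimal arguments sketch for mapping a site (the limit parameter is an assumption; check the tool's schema for the exact options):

```json
{
  "url": "https://example.com",
  "limit": 100
}
```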
Search the web for information and optionally scrape content from results to build a set of relevant results.
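A sketch of search arguments, assuming a scrapeOptions field that controls optional scraping of each result:

```json
{
  "query": "firecrawl mcp server setup",
  "limit": 5,
  "scrapeOptions": { "formats": ["markdown"], "onlyMainContent": true }
}
```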
Start an asynchronous crawl to extract content from multiple pages with depth and page limits.
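A sketch of crawl arguments; maxDepth and limit are assumed names for the depth and page limits:

```json
{
  "url": "https://example.com/blog",
  "maxDepth": 2,
  "limit": 50
}
```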
Extract structured data from pages using an LLM, with a schema and optional web search for context.
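A sketch of extract arguments, assuming urls, prompt, schema, and an enableWebSearch flag as in Firecrawl's extract API; the schema is ordinary JSON Schema:

```json
{
  "urls": ["https://example.com/pricing"],
  "prompt": "Extract the plan names and their monthly prices.",
  "schema": {
    "type": "object",
    "properties": {
      "plans": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "price": { "type": "string" }
          }
        }
      }
    }
  },
  "enableWebSearch": false
}
```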
Check the status of a batch scrape operation using its ID.
Check the status of a crawl job using its ID.
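Both status checks take only the ID returned by the original batch or crawl call. Assuming tool names along the lines of firecrawl_check_batch_status and firecrawl_check_crawl_status, the arguments object is just the ID (the value shown is a placeholder):

```json
{ "id": "batch_abc123" }
```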