Provides web scraping, crawling, search, and data extraction via an MCP server with retry, rate limiting, and optional cloud or self‑hosted usage.
Configuration
```json
{
  "mcpServers": {
    "ashishdevthakur3-max-firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "SSE_LOCAL": "true",
        "FIRECRAWL_API_KEY": "YOUR_API_KEY",
        "FIRECRAWL_API_URL": "https://firecrawl.your-domain.com",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}
```

Firecrawl MCP Server lets you harness Firecrawl’s web scraping capabilities through a dedicated MCP server. You run the server locally or in your environment, connect via an MCP client, and perform scraping, crawling, searching, and extraction with built‑in retry, rate limiting, and optional cloud or self‑hosted LLM support.
You use Firecrawl MCP Server by starting a local or remote MCP process and connecting an MCP client. Provide your API key to enable cloud scraping by default, or point to a self‑hosted API URL if you have your own Firecrawl instance. Once running, choose the tool you want to run (for example scrape or map) and provide the target URLs or discovery parameters through your MCP client. You’ll receive results, progress updates, and status checks as the operation executes. If you need to fine‑tune behavior, adjust retry and credit thresholds via environment variables to control how aggressively the server retries requests and tracks API credits.
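The `FIRECRAWL_RETRY_*` variables together define an exponential backoff schedule. As an illustrative sketch of how they plausibly interact (assuming the nth retry waits `initialDelay * backoffFactor^n`, clamped to `maxDelay`; the server's exact logic may differ):

```typescript
// Sketch of how the FIRECRAWL_RETRY_* variables plausibly combine.
// Assumption: the delay for the nth attempt is initialDelay * backoffFactor^n,
// clamped to maxDelay. This is illustrative, not the server's actual code.
function retrySchedule(
  initialDelayMs: number,
  backoffFactor: number,
  maxDelayMs: number,
  maxAttempts: number
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    delays.push(Math.min(initialDelayMs * backoffFactor ** attempt, maxDelayMs));
  }
  return delays;
}

// With the values from the configuration above:
console.log(retrySchedule(2000, 3, 30000, 5)); // [ 2000, 6000, 18000, 30000, 30000 ]
```

Raising the backoff factor spreads retries out faster, while the max delay keeps the wait bounded once the exponential curve would overshoot it.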
Prerequisites: you need Node.js and npm installed on your machine. You can install the package globally if you prefer, or run the MCP server directly with a single command that invokes the firecrawl-mcp package.
Install using a global package (recommended for a quick start):

```bash
npm install -g firecrawl-mcp
```

Or run directly with npx if you want to avoid a global install:

```bash
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
```

Optional: run in Server‑Sent Events (SSE) local mode to observe streaming updates instead of the default transport:

```bash
env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
```
Use the URL http://localhost:3000/sse to view streaming results.

Cursor configuration (examples you can adapt for your setup):
```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```

Windsurf configuration example (model config snippet):
```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```

VS Code manual setup for quick local runs (JSON block):
```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "Firecrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "firecrawl": {
        "command": "npx",
        "args": ["-y", "firecrawl-mcp"],
        "env": {
          "FIRECRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}
```

If you prefer a self‑hosted API, set FIRECRAWL_API_URL to your endpoint. You can also adjust retry behavior and credit monitoring to suit your workload.
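One way to picture the endpoint selection is a small resolver: a self‑hosted FIRECRAWL_API_URL takes precedence, and otherwise the cloud API is used. This is an illustrative sketch, not the server's actual code; the cloud base URL shown is an assumption for the example:

```typescript
// Illustrative sketch of endpoint resolution; the real server logic may differ,
// and the cloud base URL here is an assumption for the example.
const CLOUD_API_URL = "https://api.firecrawl.dev";

function resolveApiUrl(env: Record<string, string | undefined>): string {
  // A self-hosted FIRECRAWL_API_URL takes precedence over the cloud default.
  const selfHosted = env["FIRECRAWL_API_URL"]?.trim();
  return selfHosted ? selfHosted : CLOUD_API_URL;
}

console.log(resolveApiUrl({ FIRECRAWL_API_URL: "https://firecrawl.your-domain.com" }));
// https://firecrawl.your-domain.com
console.log(resolveApiUrl({})); // https://api.firecrawl.dev
```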
Keep your API key secure and only expose it to trusted environments. When running locally, avoid publishing your environment variable values in shared scripts or configuration files. Use environment management tools or secret vaults where possible.
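When you do need to share a configuration snippet, one option is to redact secret‑looking values first. A minimal sketch; which env keys count as secrets is an assumption you should adapt to your setup:

```typescript
// Redact sensitive env values before sharing a config snippet.
// The set of keys treated as secrets is an assumption for illustration.
const SECRET_KEYS = new Set(["FIRECRAWL_API_KEY"]);

function redactEnv(env: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    out[key] = SECRET_KEYS.has(key) ? "***REDACTED***" : value;
  }
  return out;
}

console.log(redactEnv({ FIRECRAWL_API_KEY: "fc-abc123", SSE_LOCAL: "true" }));
// { FIRECRAWL_API_KEY: '***REDACTED***', SSE_LOCAL: 'true' }
```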
If you see rate limit or transient errors, rely on automatic retries with exponential backoff. Check logs for messages about retry attempts, credit warnings, or rate limit status. If you switch to a self‑hosted endpoint, ensure the API URL is reachable and the key matches the environment’s authentication requirements.
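Transient errors are typically distinguished from permanent ones by HTTP status code. As a hedged sketch of that classification (the exact set of statuses the server retries is not specified here):

```typescript
// Sketch: which HTTP statuses are usually worth retrying with backoff.
// The precise set firecrawl-mcp retries is an assumption for illustration.
function isTransient(status: number): boolean {
  if (status === 429) return true;                 // rate limited: retry after backoff
  if (status >= 500 && status < 600) return true;  // server-side error: likely transient
  return false;                                    // other 4xx: treat as permanent
}

console.log(isTransient(429)); // true
console.log(isTransient(503)); // true
console.log(isTransient(404)); // false
```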
The server supports a range of tools for web scraping, discovery, and data extraction. Use the client to select the appropriate tool for your task and provide the necessary inputs.
- Scrape content from a single URL with advanced options, returning markdown or HTML according to the requested formats.
- Scrape multiple URLs efficiently with built‑in rate limiting and parallel processing; returns an operation ID for status checks.
- Discover all indexed URLs on a site to understand its structure before scraping.
- Search the web for information and optionally extract content from results.
- Start an asynchronous crawl to extract content from multiple pages within a site, with controls for depth and limits.
- Extract structured data from pages using LLM capabilities, supporting custom prompts and schemas.
- Conduct deep, multi-source web research with analysis and sources.
- Generate a standardized llms.txt file for a domain to guide AI interactions with the site.
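Under the hood, MCP clients invoke tools like these with JSON-RPC 2.0 `tools/call` requests. A sketch of building such a request body; the tool name `firecrawl_scrape` and its argument shape are assumptions for illustration:

```typescript
// Build a JSON-RPC 2.0 "tools/call" request, the shape MCP clients use to
// invoke a server tool. The tool name and arguments below are illustrative
// assumptions, not a documented Firecrawl contract.
function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): Record<string, unknown> {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const request = buildToolCall(1, "firecrawl_scrape", {
  url: "https://example.com",
  formats: ["markdown"],
});
console.log(JSON.stringify(request, null, 2));
```

In practice your MCP client builds and sends these requests for you; the sketch only shows what crosses the wire when you pick a tool and supply its inputs.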