Provides web scraping, crawling, and data extraction via MCP for cloud or self-managed environments.
Configuration
{
"mcpServers": {
"ampcome-mcps-firecrawl-mcp": {
"command": "npx",
"args": [
"-y",
"firecrawl-mcp"
],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY",
"FIRECRAWL_API_URL": "https://firecrawl.your-domain.com",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}

You run the Firecrawl MCP Server to access powerful web scraping, crawling, and data extraction capabilities through a consistent MCP interface. It connects with Firecrawl to provide scalable scraping, discovery, data extraction, and batch processing, with built‑in retry and rate‑limit handling for reliability in cloud or self‑hosted environments.
Start the MCP server locally and connect your MCP client to perform web scraping tasks. You can run single-page extractions, batch scraping for multiple URLs, site mapping to discover URLs, search across the web, and in‑depth crawling for comprehensive coverage. Use the available tools to tailor your workflow: scrape a page, batch_scrape several URLs, map a site, search the web, extract structured data, crawl entire sections, conduct deep research, or generate an LLMs.txt for a domain.
Typical usage patterns include starting a server process, then sending tool requests through your MCP client. You will provide the target URLs, prompts, and any extraction schemas or formats. The server handles retries on rate limits, queuing, and parallel processing to optimize throughput while monitoring your API credits when using cloud access.
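As a sketch of that request flow, an MCP client might issue a tools/call request like the following for a single-page scrape. The firecrawl_scrape tool name matches the tool list later in this page, but treat the argument names shown (url, formats, onlyMainContent) as illustrative rather than a complete schema.

```json
{
  "method": "tools/call",
  "params": {
    "name": "firecrawl_scrape",
    "arguments": {
      "url": "https://example.com/article",
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```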
# Prerequisites: Node.js installed on your system
# Install and start the MCP server via npx
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

If you prefer a manual installation that you run locally, install the package globally and then start the server as needed.
npm install -g firecrawl-mcp

To run the server in a local development setup that uses Server-Sent Events (SSE) for transport instead of standard I/O, start with SSE enabled.
env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Set your API key and optional configuration values as environment variables when starting the server. The cloud API requires FIRECRAWL_API_KEY, while a self‑hosted instance may use FIRECRAWL_API_URL in place of the cloud endpoint. You can also adjust retry behavior and credit monitoring with dedicated environment variables.
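When SSE mode is enabled, an MCP client that supports SSE transport can point at the local endpoint instead of spawning the process itself. A minimal client entry might look like the sketch below; the port 3000 default is noted in the troubleshooting section, but the exact URL path and config shape are assumptions to verify against your client's documentation.

```json
{
  "mcpServers": {
    "firecrawl-sse": {
      "url": "http://localhost:3000/sse"
    }
  }
}
```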
export FIRECRAWL_API_KEY=your-api-key
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com # if using self-hosted
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5
export FIRECRAWL_RETRY_INITIAL_DELAY=2000
export FIRECRAWL_RETRY_MAX_DELAY=30000
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500

If you are integrating with specific tooling like Cursor, Windsurf, or Claude Desktop, follow the corresponding configuration blocks and provide your API key when prompted. Ensure the MCP server name matches your client configuration and that the command and environment match the examples for the integration.
The server includes automatic retries for transient errors and rate-limit handling with exponential backoff. It also monitors credit usage when using a cloud API and provides warnings and critical alerts so you can prevent unexpected service interruptions.
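With the example retry settings above (initial delay 2000 ms, backoff factor 3, max delay 30000 ms), the wait between attempts grows as follows. This is a standalone sketch of capped exponential backoff under those settings, not the server's actual implementation.

```shell
# Sketch of capped exponential backoff using the example retry settings:
# the delay starts at FIRECRAWL_RETRY_INITIAL_DELAY, is multiplied by
# FIRECRAWL_RETRY_BACKOFF_FACTOR after each attempt, and is capped at
# FIRECRAWL_RETRY_MAX_DELAY.
initial=2000
factor=3
max=30000
delay=$initial
for attempt in 1 2 3 4 5; do
  echo "attempt $attempt: wait ${delay}ms before retrying"
  delay=$((delay * factor))
  if [ "$delay" -gt "$max" ]; then
    delay=$max
  fi
done
# prints delays of 2000, 6000, 18000, 30000, 30000 ms across the five attempts
```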
If you encounter issues starting the server, check that FIRECRAWL_API_KEY is set correctly and that the Node.js environment is available. For SSE mode, confirm SSE_LOCAL is set to true and that the port (default 3000) is reachable if you are consuming via the SSE endpoint.
The server exposes a suite of tools for different web data tasks. Pick the tool that matches your goal:
- Scrape: extract content from a single URL with formatting options.
- Batch Scrape: scrape multiple URLs efficiently with built‑in rate limiting.
- Map: discover all indexed URLs on a website to plan your scraping workflow.
- Search: perform a web search and optionally extract content from results.
- Crawl: start an asynchronous crawl of a site to extract content from many pages.
- Extract: use LLMs to pull structured data from pages with a defined schema.
- Deep Research: conduct multi-source, in‑depth research with structured outputs.
- Generate LLMs.txt: create a standardized LLMs.txt for site interaction policies.
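For the Extract tool, a request typically supplies the target URLs, a natural-language prompt, and a JSON schema describing the desired output. The sketch below assumes the firecrawl_extract tool name and the urls/prompt/schema argument names; verify both against the tool listing your client reports.

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/products/widget"],
    "prompt": "Extract the product name and price from the page",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" }
      },
      "required": ["name", "price"]
    }
  }
}
```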
The server can be started with a simple npx invocation and your API key supplied as an environment variable.
- Single URL scrape: extract the main content from a known page and format as markdown or HTML.
- Batch workflow: map a site, then batch scrape the discovered pages for consolidation.
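The batch workflow can be sketched as two tool calls: a map call to enumerate a site's URLs, followed by a batch scrape over the discovered pages. The argument names below (url, urls, options) are illustrative assumptions, not a verified schema.

```json
[
  {
    "name": "firecrawl_map",
    "arguments": { "url": "https://example.com" }
  },
  {
    "name": "firecrawl_batch_scrape",
    "arguments": {
      "urls": ["https://example.com/page-1", "https://example.com/page-2"],
      "options": { "formats": ["markdown"], "onlyMainContent": true }
    }
  }
]
```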
1. Ensure the API key is valid and active for cloud usage.
2. If working self-hosted, verify the API URL is reachable.
3. Check retry settings if rate limits are encountered.
4. Verify the server is started with the correct command and environment variables.
firecrawl_scrape: Extract content from a single URL with formatting options (markdown/HTML) and control over content scope.
firecrawl_batch_scrape: Scrape multiple known URLs efficiently with built‑in rate limiting and parallel processing.
firecrawl_map: Discover all URLs on a website to plan scraping and identify sections to target.
firecrawl_search: Perform a web search and optionally scrape content from the results.
firecrawl_crawl: Start an asynchronous crawl of a site to collect content from many pages, with depth and limit controls.
firecrawl_extract: Extract structured data from pages using LLMs with a defined schema.
firecrawl_deep_research: Conduct in‑depth, multi-source research with AI analysis and cited sources.
firecrawl_generate_llmstxt: Generate an LLMs.txt file for a domain to guide AI interactions with the site.