Provides web scraping, crawling, discovery, and content extraction via MCP with cloud and self-hosted options.
Configuration
{
  "mcpServers": {
    "firecrawl-firecrawl-mcp-server": {
      "url": "http://localhost:3000/mcp",
      "headers": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY",
        "FIRECRAWL_API_URL": "https://firecrawl.your-domain.com",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}

You can use the Firecrawl MCP Server to access web scraping, crawling, and content extraction capabilities through a Model Context Protocol (MCP) interface. It enables automated retries, rate limiting, discovery, and batch processing, and supports both cloud and self-hosted deployments for flexible integration with your tools and workflows.
To use the Firecrawl MCP Server, connect your MCP client or integration to one of the provided endpoints or run it locally with the standard command. You can perform targeted actions like scraping a single page, scraping multiple known URLs, discovering URLs on a site, performing live web searches, or running complex multi-source research. You can also start long-running crawl jobs and extract structured data using guided prompts or schemas. When you start a tool, you receive a job or operation ID, which you poll to check results or status updates. The server automatically handles retries on transient failures and applies rate limiting to avoid overwhelming the target sites.
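The poll-until-done pattern described above can be sketched as follows. This is a minimal illustration, not Firecrawl's client code: the `get_status` callable and the `"completed"`/`"failed"` status strings are hypothetical stand-ins for whatever your MCP client actually exposes.

```python
import time

def poll_job(get_status, job_id, interval=2.0, timeout=60.0):
    """Poll a status function until the job finishes or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_status(job_id)  # e.g. wraps an MCP status-check tool call
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Usage with a stub that reports completion on the third check:
calls = {"n": 0}
def fake_status(job_id):
    calls["n"] += 1
    return {"status": "completed" if calls["n"] >= 3 else "running"}

print(poll_job(fake_status, "job-123", interval=0.01)["status"])  # completed
```

In practice you would replace `fake_status` with a call to the server's status-check tool, using the job ID returned when the crawl or agent job was started.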
Prerequisites: ensure Node.js is installed on your system. You can verify with the commands node -v and npm -v. You will also need a Firecrawl API key if you plan to use the cloud API.
Option 1: Run with npx (quick start) Use this to start the MCP server directly without a local install.
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Option 2: Manual installation (local) Install the package globally and run from your environment.
npm install -g firecrawl-mcp

You can also connect through common development environments by adding the MCP server as a tool in your editor or integration platform. For example, you can configure an MCP server with an environment key for authentication and then reference that server in your workflow definitions. When you configure environments, you may see options for cloud API keys and self-hosted API URLs, as well as optional retry and credit monitoring settings.
To expose the local server over HTTP for testing, you can enable a streamable HTTP local mode and use the provided endpoint such as http://localhost:3000/mcp. This is useful for environments that prefer an HTTP-based MCP connection rather than a local stdio transport.
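A streamable HTTP MCP endpoint speaks JSON-RPC 2.0, so a client begins by POSTing an initialize request to the endpoint. The sketch below only builds the message; the protocol version string and client-info fields are illustrative, and your MCP client library normally handles this handshake for you.

```python
import json

def build_initialize_request(request_id=1):
    """Build a JSON-RPC 2.0 'initialize' message for an MCP endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # illustrative version string
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }

# The serialized payload would be POSTed to http://localhost:3000/mcp.
print(json.dumps(build_initialize_request()))
```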
Keep your API keys secure and avoid sharing them in public code or logs. Use environment variables to inject keys into your runtime, and apply least-privilege principles for any account used by the MCP server. If you are operating in a self-hosted setup, prefer restricting access to the endpoint and rotating credentials regularly.
If you encounter rate limit errors, the server will automatically retry with exponential backoff. If you see persistent failures, check your API key validity, network connectivity, and whether the target API imposes new restrictions. Review logs for messages about rate limits, retries, or authentication issues to identify the root cause.
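With the retry settings from the configuration above (initial delay 2000 ms, backoff factor 3, max delay 30000 ms, 5 attempts), the wait between attempts grows as sketched below. This is a minimal model of an exponential backoff schedule, not the server's actual implementation.

```python
def backoff_delays(initial_ms=2000, factor=3, max_ms=30000, attempts=5):
    """Exponential backoff schedule capped at max_ms; one delay per retry."""
    delays = []
    delay = initial_ms
    for _ in range(attempts - 1):  # no wait is needed after the final attempt
        delays.append(min(delay, max_ms))
        delay *= factor
    return delays

print(backoff_delays())  # [2000, 6000, 18000, 30000]
```

Note how the fourth delay would be 54000 ms but is capped by FIRECRAWL_RETRY_MAX_DELAY.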
- Scrape a single page for structured data using a JSON schema to extract exact fields like name, price, and description.
- Map a site to discover all indexed URLs, then batch-scrape a subset of known pages for data collection.
- Run a multi-site search to gather relevant results and optionally scrape or extract targeted information from top results.
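For the single-page extraction use case above, a schema for the name/price/description fields might look like the following. The field names come from the example itself, and the exact argument shape your client passes to the scrape tool is an assumption here and may differ.

```python
import json

# Hypothetical JSON Schema for extracting product fields from a page.
product_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "description": {"type": "string"},
    },
    "required": ["name", "price"],
}

# Sketch of arguments an MCP client might send to a scrape tool.
tool_args = {
    "url": "https://example.com/product/123",
    "formats": [{"type": "json", "schema": product_schema}],
}
print(json.dumps(tool_args, indent=2))
```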
The MCP server supports cloud API usage with an API key and self-hosted instances via a configurable API URL. You can enable automatic retries and monitor credits to manage usage and avoid unexpected interruptions.
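Credit monitoring against the warning and critical thresholds shown in the configuration (2000 and 500) could work as in this sketch; the `credit_level` helper is hypothetical, not part of the server.

```python
def credit_level(remaining, warning=2000, critical=500):
    """Classify remaining credits against the configured thresholds."""
    if remaining <= critical:
        return "critical"
    if remaining <= warning:
        return "warning"
    return "ok"

print(credit_level(2500), credit_level(1200), credit_level(300))  # ok warning critical
```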
Extract content from a single URL with structured data formats, using a JSON schema to select the fields you need.
Scrape multiple known URLs efficiently with built-in rate limiting and parallel processing.
Discover all indexed URLs on a site to plan content extraction or crawling.
Perform web searches and optionally extract content from search results to find relevant information.
Start an asynchronous crawl job to extract content from multiple pages within a site, with depth and scope controls.
Use LLM capabilities to extract structured data from pages based on a provided schema and prompts.
Autonomous research agent that conducts multi-source web research, then returns structured results.
Check the status and retrieve results for an asynchronous agent job.