
Firecrawl MCP Server

Provides web scraping, crawling, search, and extraction via MCP for automated data discovery and processing.

Installation

Add the following to your MCP client configuration file:

```json
{
  "mcpServers": {
    "firecrawl_http": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

You can run the Firecrawl MCP Server to integrate Firecrawl web scraping capabilities with your own workflows. This MCP server lets you discover, crawl, search, scrape, and extract data from the web with built‑in retry and rate‑limit handling, and supports cloud or self‑hosted setups.

How to use

Use an MCP client to connect to the Firecrawl MCP Server and perform web data tasks. You can start a local streamable HTTP endpoint or run a local stdio server, then use its tools to scrape a single page, batch scrape multiple URLs, map a site to discover URLs, run web searches, crawl entire site sections, or extract structured data from pages. The server handles automatic retries, rate limiting, and progress tracking, so you can focus on getting the data you need and integrating results into your workflow.

How to install

Prerequisites: you need Node.js and npm (or an environment that can run npm/npx). Install or run the server using one of the provided methods, then configure your MCP client to connect.

Configuration and startup methods

You have multiple ways to run the server, depending on your workflow and environment. The examples below are the explicit ways you can start and connect your MCP client to Firecrawl MCP.

HTTP (remote MCP endpoint) config

```json
{
  "type": "http",
  "name": "firecrawl_http_mcp",
  "url": "http://localhost:3000/mcp",
  "args": []
}
```

STDIO (local MCP server) config

```json
{
  "type": "stdio",
  "name": "firecrawl_mcp",
  "command": "npx",
  "args": ["-y", "firecrawl-mcp"],
  "env": {
    "FIRECRAWL_API_KEY": "YOUR_API_KEY"
  }
}
```

Running in Streamable HTTP Local Mode

To run the server locally with streamable HTTP, enable the streamable HTTP mode, provide your API key, and start the MCP server. Your MCP client then connects to the local endpoint shown in the HTTP configuration above (http://localhost:3000/mcp).
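A minimal sketch of starting this mode from a shell, assuming the current firecrawl-mcp release reads an HTTP_STREAMABLE_SERVER flag from the environment (check the package README for the exact variable name in your version):

```bash
# Assumption: HTTP_STREAMABLE_SERVER is the flag firecrawl-mcp reads;
# verify the exact variable name against the package documentation.
export FIRECRAWL_API_KEY=YOUR_API_KEY
export HTTP_STREAMABLE_SERVER=true
npx -y firecrawl-mcp
# The MCP endpoint is then available at http://localhost:3000/mcp
```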

Environment variables to set

Required for cloud API usage:

- FIRECRAWL_API_KEY: Your Firecrawl API key (required when using the cloud API)
- FIRECRAWL_API_URL: Optional; a custom endpoint for self-hosted instances

Optional retry and credit-monitoring variables, such as FIRECRAWL_RETRY_MAX_ATTEMPTS and FIRECRAWL_CREDIT_WARNING_THRESHOLD, can be set to tune behavior.
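For example, the retry and credit variables named above can be passed through the env block of the stdio configuration. The values below are illustrative, not prescribed settings:

```json
{
  "type": "stdio",
  "name": "firecrawl_mcp",
  "command": "npx",
  "args": ["-y", "firecrawl-mcp"],
  "env": {
    "FIRECRAWL_API_KEY": "YOUR_API_KEY",
    "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
    "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "1000"
  }
}
```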

Usage patterns you can perform

- Scrape a single page with scrape to get content in markdown or HTML.
- Batch scrape a list of known URLs to get multiple pages efficiently.
- Map a site to discover all indexed URLs before deciding what to scrape.
- Search the web for specific topics and optionally extract content from results.
- Crawl a site to extract content across multiple pages with depth and page limits.
- Extract structured data from pages using a prompt and a schema.
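As a sketch of what the first pattern looks like on the wire, an MCP client issues a standard tools/call request; the argument names here (url, formats) follow Firecrawl's scrape options but may differ slightly between server versions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "firecrawl_scrape",
    "arguments": {
      "url": "https://example.com",
      "formats": ["markdown"]
    }
  }
}
```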

Security and reliability notes

The server includes automatic retries with exponential backoff and rate‑limit handling to minimize failed requests due to transient errors. Monitor credit usage for cloud API and adjust thresholds to avoid interruptions. Use self‑hosted setups behind proper auth and access controls when integrating into production workflows.

Examples of common commands you’ll run

Start a local MCP server with an API key:

- Run the stdio configuration with your API key via npx to start the Firecrawl MCP server locally, as sketched below.
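A minimal example, assuming YOUR_API_KEY is replaced with a valid Firecrawl key:

```bash
# Start the Firecrawl MCP server over stdio (the same command the
# stdio configuration above runs for you).
FIRECRAWL_API_KEY=YOUR_API_KEY npx -y firecrawl-mcp
```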

Notes on upgrading and contributing

Keep dependencies up to date and run tests when you modify server logic. If you contribute, ensure that new features are covered by tests and document any new environment variables or configuration options.

Available tools

firecrawl_scrape

Scrape content from a single URL with options for formats, main content, wait time, timeouts, allowed/disallowed tags, and TLS behavior.
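An illustrative arguments object; option names such as onlyMainContent, waitFor, timeout, and excludeTags reflect Firecrawl's scrape options but should be checked against the tool schema the server reports:

```json
{
  "url": "https://example.com/article",
  "formats": ["markdown"],
  "onlyMainContent": true,
  "waitFor": 1000,
  "timeout": 30000,
  "excludeTags": ["nav", "footer"]
}
```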

firecrawl_batch_scrape

Scrape multiple known URLs efficiently with built‑in rate limiting and parallel processing, returning an operation ID for status checks.
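A sketch of a batch request; the returned operation ID is what you pass to firecrawl_check_batch_status, and the options field (mirroring the single-page scrape options) is an assumption to verify against the tool schema:

```json
{
  "urls": [
    "https://example.com/page-1",
    "https://example.com/page-2"
  ],
  "options": {
    "formats": ["markdown"]
  }
}
```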

firecrawl_map

Discover all indexed URLs on a site to plan scraping or crawling efforts.

firecrawl_search

Search the web for information and optionally scrape content from results to build a set of relevant results.
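An illustrative search call; the parameter names (query, limit, scrapeOptions) are assumptions based on Firecrawl's search API and should be verified against the tool schema:

```json
{
  "query": "firecrawl mcp server setup",
  "limit": 5,
  "scrapeOptions": {
    "formats": ["markdown"]
  }
}
```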

firecrawl_crawl

Start an asynchronous crawl to extract content from multiple pages with depth and page limits.
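An illustrative crawl request with depth and page limits; maxDepth and limit are the assumed option names, following Firecrawl's crawl options:

```json
{
  "url": "https://example.com/blog",
  "maxDepth": 2,
  "limit": 50
}
```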

firecrawl_extract

Extract structured data from pages using an LLM, with a schema and optional web search for context.
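A sketch of an extract call combining a prompt with a JSON schema; the field names (urls, prompt, schema) follow Firecrawl's extract API but are assumptions for this server's exact tool schema:

```json
{
  "urls": ["https://example.com/pricing"],
  "prompt": "Extract the product name and monthly price.",
  "schema": {
    "type": "object",
    "properties": {
      "product": { "type": "string" },
      "monthly_price": { "type": "number" }
    },
    "required": ["product", "monthly_price"]
  }
}
```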

firecrawl_check_batch_status

Check the status of a batch scrape operation using its ID.
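The status check takes the operation ID returned by firecrawl_batch_scrape; the value below is a placeholder and the parameter name id is an assumption:

```json
{
  "id": "batch_1234567890"
}
```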

firecrawl_check_crawl_status

Check the status of a crawl job using its ID.