This MCP server integrates Crawl4AI with Cursor AI, enabling LLMs in Cursor Composer's agent mode to perform web scraping and crawling operations. It serves as a bridge between Cursor AI and web content, allowing AI models to access and process information from websites.
First, install uv, a Python package and environment manager:
MacOS/Linux:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Windows:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Restart your terminal after installation to ensure the `uv` command is recognized.
```shell
# Clone the repository
git clone https://github.com/[username]/crawl4ai-mcp.git

# Navigate to the crawl4ai-mcp directory
cd crawl4ai-mcp

# Install dependencies
uv venv
uv sync

# Activate the virtual environment
source .venv/bin/activate   # On Linux/Mac
# OR
.venv\Scripts\activate      # On Windows

# Run the server
python main.py
```
Add the MCP server to Cursor's configuration:
Find the full path to your `uv` executable by running:

```shell
which uv    # On Linux/Mac
where uv    # On Windows
```
Add the following JSON configuration to Cursor's MCP Servers settings:
```json
{
  "mcpServers": {
    "Crawl4AI": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/crawl4ai-mcp",
        "run",
        "main.py"
      ]
    }
  }
}
```
Replace `/ABSOLUTE/PATH/TO/PARENT/FOLDER/crawl4ai-mcp` with the actual absolute path to the project directory.
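If you prefer to build this entry programmatically (for example, to avoid typos in the absolute path), here is a minimal Python sketch; the `project_dir` location is a placeholder you must point at your actual clone:

```python
import json
from pathlib import Path

# Placeholder: replace with the real location of your crawl4ai-mcp clone.
project_dir = Path.home() / "crawl4ai-mcp"

# Build the same structure as the JSON snippet above.
config = {
    "mcpServers": {
        "Crawl4AI": {
            "command": "uv",
            "args": ["--directory", str(project_dir), "run", "main.py"],
        }
    }
}

# Print it ready to paste into Cursor's MCP settings.
print(json.dumps(config, indent=2))
```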
The server provides two main tools that can be used by LLMs in Cursor Composer's agent mode:
The `scrape_webpage` tool extracts content from a single webpage:

```python
scrape_webpage(url="https://example.com")
```
Parameters:

- `url` (string, required): The URL of the webpage to scrape

Returns: A list containing a `TextContent` object with the scraped content in markdown format as JSON.
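To make the return shape concrete, here is a hedged sketch of unpacking such a result. The dict-style `TextContent` and the `markdown` field are illustrative assumptions based on the description above, not the server's exact types:

```python
import json

# Hypothetical scrape_webpage result: a one-element list whose TextContent
# is modeled here as a plain dict carrying the JSON payload in "text".
result = [{"type": "text", "text": json.dumps({"markdown": "# Example Domain"})}]

payload = json.loads(result[0]["text"])  # decode the JSON string
markdown = payload["markdown"]           # the scraped page as markdown
print(markdown)
```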
The `crawl_website` tool crawls multiple pages within a website:

```python
crawl_website(
    url="https://example.com",
    crawl_depth=2,
    max_pages=10
)
```
Parameters:

- `url` (string, required): The starting URL to crawl
- `crawl_depth` (integer, optional, default: 1): Maximum depth to crawl relative to the starting URL
- `max_pages` (integer, optional, default: 5): Maximum number of pages to scrape during the crawl

Returns: A list containing a `TextContent` object with a JSON array of results for each crawled page (including URL, success status, markdown content, or error).
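Similarly, the per-page crawl results could be filtered like this (a sketch under the same assumptions; the field names follow the description above but are not guaranteed by the server):

```python
import json

# Hypothetical crawl_website result: one TextContent-like dict whose "text"
# field is a JSON array with one entry per crawled page.
result = [{"type": "text", "text": json.dumps([
    {"url": "https://example.com", "success": True, "markdown": "# Home"},
    {"url": "https://example.com/404", "success": False, "error": "HTTP 404"},
])}]

pages = json.loads(result[0]["text"])
ok = [p["url"] for p in pages if p["success"]]  # URLs scraped successfully
print(ok)
```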
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the `~/.cursor/mcp.json` file so that it is available in all of your projects.
If you only need the server in a single project, you can instead add it to the project's `.cursor/mcp.json` file.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server".
When you click that button, the `~/.cursor/mcp.json` file will open, and you can add your server like this:
```json
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
```
To add an MCP server to a single project, create a new `.cursor/mcp.json` file in that project or add the server to the existing one. The entry looks exactly the same as the global example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then see the tools the added MCP server exposes and call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.