Katana MCP Server
Provides integration of the Katana web crawler with Claude Desktop through the Model Context Protocol for automated web crawling.
Configuration
{
  "mcpServers": {
    "an040702-katana-mcp-server": {
      "command": "python",
      "args": [
        "/path/to/katana-server/katana_server.py"
      ]
    }
  }
}

This MCP server lets Claude Desktop access the Katana web crawler from ProjectDiscovery through the Model Context Protocol, enabling you to discover endpoints, paths, and hidden resources on web pages directly from Claude.
You can ask Claude Desktop to crawl websites using Katana through the MCP server you configured. Use natural language prompts to start crawls with various options, such as depth, headless browsing, and JavaScript crawling. You can target a single URL, multiple URLs, or a list loaded from a file. You can also request version checks or download JS assets discovered during crawling.
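To make the option mapping concrete, here is an illustrative sketch (not part of the server's actual code) of how crawl options requested in a prompt could translate into a Katana command line. The helper name is hypothetical, and flag names are taken from Katana's CLI and may vary between versions.

```python
# Hypothetical helper: map crawl options to a Katana argument list.
# Flags (-u, -d, -headless, -jc) follow Katana's CLI conventions, but
# verify them against `katana -h` for your installed version.
def build_katana_args(url, depth=2, headless=False, js_crawl=False):
    args = ["katana", "-u", url, "-d", str(depth)]
    if headless:
        args.append("-headless")  # crawl with a headless browser
    if js_crawl:
        args.append("-jc")  # parse endpoints out of JavaScript
    return args

print(build_katana_args("https://example.com", depth=3, js_crawl=True))
```

A prompt such as "crawl example.com to depth 3 with JavaScript parsing" would correspond to an invocation like the one printed above.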
# Prerequisites
python3 --version
python --version
# Ensure Python 3.10+ is used
python3.10 -V
# Katana prerequisites
# Go 1.21+ should be installed to set up Katana
go version
# Install Katana
go install github.com/projectdiscovery/katana/cmd/katana@latest
# Verify Katana installation
katana --version

Install the MCP server and its dependencies. You can clone or download the project, then install the Python dependencies.
cd katana-server
pip install -r requirements.txt

Configure Claude Desktop to connect to the MCP server. Open Claude Desktop and add the MCP server entry as shown in the examples below. The config file (claude_desktop_config.json) is typically located in your Claude data directory.
Example using python:

{
  "mcpServers": {
    "katana": {
      "command": "python",
      "args": [
        "/path/to/katana-server/katana_server.py"
      ]
    }
  }
}

Example for Windows:

{
  "mcpServers": {
    "katana": {
      "command": "C:\\Python311\\python.exe",
      "args": [
        "C:\\path\\to\\katana-server\\katana_server.py"
      ]
    }
  }
}

Example using python3:

{
  "mcpServers": {
    "katana": {
      "command": "python3",
      "args": [
        "/path/to/katana-server/katana_server.py"
      ]
    }
  }
}

After saving the MCP server configuration, restart Claude Desktop to apply the changes and make Katana available through the Model Context Protocol.
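Before restarting, it can save a debugging round-trip to sanity-check the JSON you saved. The snippet below is an illustrative check, not part of the project: it parses a config snippet and confirms the entry has the shape shown in the examples above.

```python
import json

# Illustrative sanity check: parse a Claude Desktop config snippet and
# confirm the MCP entry has a command string and an args list.
config = json.loads("""
{
  "mcpServers": {
    "katana": {
      "command": "python",
      "args": ["/path/to/katana-server/katana_server.py"]
    }
  }
}
""")

entry = config["mcpServers"]["katana"]
assert isinstance(entry["command"], str) and isinstance(entry["args"], list)
print("config OK:", entry["command"], entry["args"][0])
```

A malformed file (for example, a trailing comma or an unmatched brace) will raise a `json.JSONDecodeError` here, which is the same reason Claude Desktop would silently fail to load the server.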
Troubleshooting after setup includes verifying that Katana is installed and on PATH, confirming that Claude Desktop recognizes the MCP server, and ensuring Python and the dependencies are correctly installed.
Common checks:
- Katana readiness: run katana --version to confirm installation.
- MCP connectivity: restart Claude Desktop if you encounter connection errors and check Claude Desktop logs for details.
- Command validity: ensure the katana_server.py path is correct and the CLI command is executable.
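The PATH check above can be scripted. This is a hypothetical troubleshooting helper (not shipped with the server) that confirms the binaries the setup depends on are resolvable before you dig into Claude Desktop logs.

```python
import shutil

# Hypothetical helper: report whether each required executable is on PATH.
def check_tool(name):
    return shutil.which(name) is not None

for tool in ("katana", "python"):
    status = "found" if check_tool(tool) else "MISSING from PATH"
    print(f"{tool}: {status}")
```

If katana reports as missing here, Claude Desktop's tool calls will fail regardless of how the MCP server itself is configured.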
This server exposes several Katana-related tools that you can invoke through Claude Desktop. They are designed to be used for targeted crawling tasks and data extraction.
- katana_crawl: Crawl a URL or a list of URLs with extensive options such as depth, concurrency, headless browsing, JavaScript crawling, form handling, and output formatting.
- katana_crawl_from_file: Crawl from a file containing URLs with configurable depth, concurrency, and output format.
- katana_check_version: Check the installed Katana version.
- katana_download_js: Crawl a site to locate JavaScript files, download them to a local directory, and report totals like discovered, downloaded, and failed files.
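As a sketch of the kind of post-processing a tool like katana_download_js implies, the snippet below filters JavaScript URLs out of crawl output. It assumes Katana's JSONL output format, where each line carries a request object with an endpoint URL; the exact field names may differ between Katana versions, so treat this as illustrative rather than a description of the server's implementation.

```python
import json

# Sample lines in the shape of Katana's JSONL output (field names assumed).
sample_output = """\
{"request": {"endpoint": "https://example.com/", "method": "GET"}}
{"request": {"endpoint": "https://example.com/app.js", "method": "GET"}}
"""

def extract_js_endpoints(jsonl_text):
    """Collect endpoint URLs that look like JavaScript files."""
    endpoints = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        url = record.get("request", {}).get("endpoint", "")
        if url.endswith(".js"):
            endpoints.append(url)
    return endpoints

print(extract_js_endpoints(sample_output))
```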
Examples of usage include crawls with JavaScript parsing and a headless browser, crawling multiple URLs at a specified depth, and saving responses to a local folder. Use natural language prompts in Claude Desktop to tailor the crawl to your needs, such as depth, scope, and output format.
Only crawl sites you are authorized to test. Respect robots.txt and any site-specific crawling policies. Limit concurrency and respect rate limits to avoid overloading target sites.