Deep Research (Tavily) MCP server

Enables comprehensive web research by leveraging Tavily's Search and Crawl APIs to aggregate information from multiple sources, extract detailed content, and structure data specifically for generating technical documentation and research reports.
Provider
Pink Pixel
Release date
May 11, 2025

Deep Research MCP Server is a Model Context Protocol (MCP) compliant server designed to perform comprehensive web research using Tavily's Search and Crawl APIs. It gathers extensive information on a topic and produces structured JSON output tailored for Large Language Models to create detailed markdown documents.

Installation

Prerequisites

  • Node.js (version 18.x or later recommended)
  • npm or Yarn

Installation Options

Using Smithery (for Claude Desktop):

npx -y @smithery/cli install @pinkpixel/dev-deep-research-mcp --client claude

Quick Use with NPX (Recommended):

npx @pinkpixel/deep-research-mcp

Global Installation:

npm install -g @pinkpixel/deep-research-mcp

Then run with:

deep-research-mcp

Configuration

Required: Tavily API Key

Set your Tavily API key using one of these methods:

In a .env file:

TAVILY_API_KEY="tvly-YOUR_ACTUAL_API_KEY"

In command line:

TAVILY_API_KEY="tvly-YOUR_ACTUAL_API_KEY" npx @pinkpixel/deep-research-mcp

Optional: Custom Documentation Prompt

You can override the default documentation prompt. The prompt is resolved in this order of precedence:

  1. Tool argument (documentation_prompt parameter in tool call)
  2. Environment variable (DOCUMENTATION_PROMPT)
  3. Default built-in prompt

Setting via .env file:

DOCUMENTATION_PROMPT="Your custom, detailed instructions for the LLM..."
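
The precedence above can be sketched as a small resolver. This is a minimal illustration of the documented order, not the server's actual code; the function and constant names are hypothetical:

```typescript
// Hypothetical sketch of the documented precedence:
// tool argument > DOCUMENTATION_PROMPT env var > built-in default.
const DEFAULT_PROMPT = "Generate a detailed markdown research document."; // placeholder text

function resolveDocPrompt(
  toolArg: string | undefined,
  env: Record<string, string | undefined>
): string {
  return toolArg ?? env["DOCUMENTATION_PROMPT"] ?? DEFAULT_PROMPT;
}
```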

Optional: Output Path Configuration

Specify where research documents and images should be saved. The path is resolved in this order of precedence:

  1. Tool argument (output_path parameter in tool call)
  2. Environment variable (RESEARCH_OUTPUT_PATH)
  3. Default path (timestamped subfolder in user's Documents folder)

Setting via .env file:

RESEARCH_OUTPUT_PATH="/path/to/your/research/folder"

Optional: Timeout and Performance Configuration

Configure timeout and performance settings (timeouts are in seconds; the values below are examples):

SEARCH_TIMEOUT=120
CRAWL_TIMEOUT=300
MAX_SEARCH_RESULTS=10
CRAWL_MAX_DEPTH=2
CRAWL_LIMIT=15
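
One plausible way a server like this reads numeric settings is to fall back to the documented defaults when a variable is unset or not a number. The helper name below is hypothetical, not taken from the server's source:

```typescript
// Hypothetical sketch: read a numeric env setting, falling back to a default
// when the variable is unset or not a valid integer.
function readIntEnv(
  env: Record<string, string | undefined>,
  name: string,
  fallback: number
): number {
  const raw = env[name];
  const parsed = raw === undefined ? Number.NaN : Number.parseInt(raw, 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// Example: SEARCH_TIMEOUT set to 120 overrides the 60-second default.
const searchTimeout = readIntEnv({ SEARCH_TIMEOUT: "120" }, "SEARCH_TIMEOUT", 60);
```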

Optional: File Writing Configuration

Enable secure file writing (disabled by default):

FILE_WRITE_ENABLED=true
ALLOWED_WRITE_PATHS=/home/user/research,/home/user/documents
FILE_WRITE_LINE_LIMIT=500
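
The ALLOWED_WRITE_PATHS allowlist implies a path-containment check along these lines. This is a sketch under that assumption, not the server's actual implementation:

```typescript
import * as path from "path";

// Hypothetical allowlist check: a target path is writable only if it
// resolves inside one of the comma-separated allowed roots.
function isWriteAllowed(target: string, allowedWritePaths: string): boolean {
  const roots = allowedWritePaths.split(",").map((p) => path.resolve(p.trim()));
  const resolved = path.resolve(target);
  return roots.some(
    (root) => resolved === root || resolved.startsWith(root + path.sep)
  );
}
```

Resolving both sides before comparing is what stops `..` traversal from escaping an allowed root.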

Using the Server

The deep-research-tool accepts the following parameters:

General Parameters

  • query (string, required): The main research topic or question
  • documentation_prompt (string, optional): Custom prompt for documentation generation
  • output_path (string, optional): Path where research documents should be saved

Search Parameters

  • search_depth (string, optional, default: "advanced"): "basic" or "advanced"
  • topic (string, optional, default: "general"): "general" or "news"
  • days (number, optional): For news topics, days back to include results
  • time_range (string, optional): Time range for search results (e.g., "d", "w", "m", "y")
  • max_search_results (number, optional, default: 7): Maximum search results (1-20)
  • include_answer (boolean or string, optional, default: false): Include an LLM-generated summary
  • search_timeout (number, optional, default: 60): Timeout in seconds

Crawl Parameters

  • crawl_max_depth (number, optional, default: 1): Depth of crawl from base URL
  • crawl_max_breadth (number, optional, default: 5): Links to follow per page
  • crawl_limit (number, optional, default: 10): Total links to process per root URL
  • crawl_instructions (string, optional): Natural language instructions for crawler
  • crawl_include_images (boolean, optional, default: true): Extract image URLs
  • crawl_timeout (number, optional, default: 180): Timeout in seconds

Example Usage

Here's an example of how to call the tool:

{
  "name": "deep-research-tool",
  "arguments": {
    "query": "Explain the architecture of modern data lakes and data lakehouses.",
    "max_search_results": 5,
    "search_depth": "advanced",
    "topic": "general",
    "crawl_max_depth": 1,
    "include_answer": true,
    "output_path": "/home/username/Documents/research/datalakes-whitepaper"
  }
}

The tool returns a JSON string containing:

  • Documentation instructions
  • Original query
  • Search summary (if requested)
  • Research data with detailed findings from each source
  • Output path for saving files
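
For a client consuming the result, the shape might look roughly like this. The property names are inferred from the field list above and are assumptions, not the server's documented schema:

```typescript
// Hypothetical result shape, inferred from the field list above.
interface DeepResearchResult {
  documentation_instructions: string;
  original_query: string;
  search_summary?: string; // present only when include_answer was requested
  research_data: Array<{
    url: string;
    title: string;
    content: string;
    images?: string[];
  }>;
  output_path: string;
}

// The tool returns a JSON string, so a client parses it first.
const raw =
  '{"documentation_instructions":"Write a report.","original_query":"data lakes","research_data":[],"output_path":"/tmp/research"}';
const result = JSON.parse(raw) as DeepResearchResult;
```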

Troubleshooting

  • API Key Errors: Ensure TAVILY_API_KEY is correctly set and valid
  • No output or unexplained errors: Check the server console logs for error messages
  • Timeout Errors: Increase SEARCH_TIMEOUT and CRAWL_TIMEOUT values

How to install this MCP server

For Claude Code

To add this MCP server to Claude Code, run this command in your terminal:

claude mcp add-json "deep-research" '{"command":"npx","args":["-y","@pinkpixel/deep-research-mcp"],"env":{"TAVILY_API_KEY":"tvly-YOUR_ACTUAL_API_KEY_HERE"}}'

See the official Claude Code MCP documentation for more details.

For Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server, go to Cursor Settings > Tools & Integrations and click "New MCP Server".

Clicking that button opens the ~/.cursor/mcp.json file, where you can add your server like this:

{
    "mcpServers": {
        "deep-research": {
            "command": "npx",
            "args": [
                "-y",
                "@pinkpixel/deep-research-mcp"
            ],
            "env": {
                "TAVILY_API_KEY": "tvly-YOUR_ACTUAL_API_KEY_HERE"
            }
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then see the tools the added MCP server provides and call them when it needs to.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.

For Claude Desktop

To add this MCP server to Claude Desktop:

1. Find your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

2. Add this to your configuration file:

{
    "mcpServers": {
        "deep-research": {
            "command": "npx",
            "args": [
                "-y",
                "@pinkpixel/deep-research-mcp"
            ],
            "env": {
                "TAVILY_API_KEY": "tvly-YOUR_ACTUAL_API_KEY_HERE"
            }
        }
    }
}

3. Restart Claude Desktop for the changes to take effect
