
Deep Research MCP Server

Provides a multi-backend AI research server exposing an MCP HTTP endpoint for deep, iterative topic exploration and markdown reporting.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "teelaitila-deep-research-mcp": {
      "url": "http://localhost:3000/mcp",
      "headers": {
        "XAI_API_KEY": "YOUR_XAI_API_KEY",
        "GOOGLE_API_KEY": "YOUR_GOOGLE_API_KEY",
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY"
      }
    }
  }
}

This server is an AI-powered research assistant that performs deep, iterative research on any topic. You can reach it through an MCP client over HTTP or run it locally for development, and it automates report generation, source evaluation, and follow-up question generation to shape your research projects.

How to use

Connect your MCP client to the HTTP endpoint while the server is running, locally or remotely. The server generates targeted search queries, evaluates source reliability, and produces comprehensive markdown reports that include findings, sources, and reliability assessments. In typical MCP usage, you submit a query and receive structured results that either guide further exploration or conclude with a markdown report.
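As a sketch of what such a query might look like on the wire, an MCP tools/call request could take roughly the following shape. The tool name deep-research and the query, depth, and breadth arguments are illustrative assumptions here, not this server's documented schema; check the tool list your client receives for the actual names.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "deep-research",
    "arguments": {
      "query": "Impact of solid-state batteries on EV range",
      "depth": 2,
      "breadth": 4
    }
  }
}
```

The depth and breadth values correspond to the iterative research controls mentioned in the feature list below.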

How to install

Prerequisites: You need Node.js and npm installed on your system. You will also need API keys, set as environment variables, for each provider backend you plan to use.

Step 1 — Clone the project repository and install dependencies.

git clone https://github.com/Ozamatash/deep-research
cd deep-research
npm install

Step 2 — Prepare environment variables for API access. Create a local environment file and populate keys for the providers you plan to use.

# Copy the example environment file
cp .env.example .env.local

# Add your API keys in .env.local, for example:
OPENAI_API_KEY=YOUR_OPENAI_API_KEY
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY
XAI_API_KEY=YOUR_XAI_API_KEY

Step 3 — Build and run the server. Build the project, then start the HTTP server for MCP access.

# Build the server
npm run build

# Run the HTTP MCP server
npm run start:http

Step 4 — Connect with an MCP client. Point your client to the HTTP endpoint shown by the server when it starts, typically http://localhost:3000/mcp, and begin sending queries.
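Assuming the server speaks JSON-RPC over MCP's Streamable HTTP transport at the default URL shown above (a sketch, not verified against this particular build), you can sanity-check the endpoint with curl before pointing a full client at it:

```shell
# Send a minimal MCP initialize request to the assumed default endpoint.
# The client name/version are placeholder values for illustration.
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"example-client","version":"0.1.0"}}}' \
  || echo "server not reachable (is npm run start:http running?)"
```

A successful response echoes the server's capabilities; a connection error means the HTTP server is not up yet.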

Additional notes

If you want to run the CLI version locally, you can start with npm run start. The HTTP variant is suitable for MCP clients and remote access, while the CLI is useful for manual testing and rapid experimentation.

Configuration and security

Configure API keys and endpoints in a local environment file (.env.local). Never commit keys to a public repository. The server can operate with multiple providers; set keys for each provider you intend to use.

Troubleshooting

If the server does not respond at http://localhost:3000/mcp, verify that the HTTP start command is running and that there are no port conflicts. Check that your environment file (.env.local) is correctly loaded and that API keys are valid.
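A quick way to check both conditions from a shell (port 3000 is the assumed default; adjust if you changed it):

```shell
# Is anything listening on port 3000?
lsof -i :3000 || echo "nothing is listening on port 3000"

# Probe the endpoint; -m 2 gives up after two seconds
curl -s -m 2 -o /dev/null -w "HTTP %{http_code}\n" http://localhost:3000/mcp \
  || echo "endpoint not reachable"
```

If the port is free and the probe fails, restart the server with npm run start:http and watch its startup logs for missing or invalid API keys.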

Notes

The MCP server supports generating markdown reports and evaluating source reliability. It can be used as a Model Context Protocol tool for AI agents and offers iterative research with depth and breadth controls, as described in its features.

Available tools

Deep research engine

Performs deep, iterative research by generating targeted search queries, querying sources, scoring reliability, and producing structured outputs.

Source reliability evaluator

Assigns each source a reliability score from 0 to 1, with an explanation of the reasoning, so that high-quality information is prioritized.
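Purely as an illustration, a scored source might be represented along these lines; this shape is hypothetical, not the server's actual output schema:

```json
{
  "url": "https://example.com/battery-review",
  "reliabilityScore": 0.82,
  "reasoning": "Peer-reviewed article with recent publication date and cited primary data."
}
```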

Markdown report generator

Produces detailed markdown reports with findings, source metadata, and reliability assessments.

Follow-up question generator

Creates subsequent questions to refine research scope and clarify needs.

MCP tool integration

Exposes research capabilities as an MCP endpoint usable by AI agents.