MCP server for enabling LLM applications to perform deep research via the MCP protocol
Configuration
```json
{
  "mcpServers": {
    "assafelovic-gptr-mcp": {
      "command": "python",
      "args": [
        "/absolute/path/to/server.py"
      ],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
        "TAVILY_API_KEY": "YOUR_TAVILY_API_KEY"
      }
    }
  }
}
```

The GPT Researcher MCP Server provides a programmable bridge to perform deep web research through MCP clients. It autonomously explores and validates numerous sources to return high-quality, up-to-date information while optimizing context usage for AI workflows.
To use the GPT Researcher MCP Server, connect an MCP client to its standard input/output (STDIO) interface, or to its SSE-based transport when running in a container or web environment. Start a session, initialize the MCP connection, and send tool requests to perform deep research, fast searches, and report generation. The server manages sessions, runs the requested tools, and returns structured results that you can feed into your AI prompts.
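Under the hood the exchange is JSON-RPC 2.0: the client first sends an initialize request, then can list and call the server's tools. The sketch below builds the first two payloads a client would write to the transport; the protocolVersion and clientInfo values are illustrative, not specific to this server.

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# 1. Open the session; version and capability values here are illustrative.
initialize = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# 2. Ask the server which tools it exposes (deep_research, quick_search, ...).
list_tools = jsonrpc_request(2, "tools/list", {})

# Each request is serialized and written to the STDIO or SSE transport.
wire = [json.dumps(m) for m in (initialize, list_tools)]
```

In practice an MCP client library handles this handshake for you; the payloads are shown only to make the session flow concrete.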
Prerequisites: ensure you have Python 3.11 or higher installed, and obtain API keys for the services you plan to use (for example, OpenAI and Tavily). You will also need a working network configuration if you plan to run in Docker.
1. Clone the GPT Researcher repository and prepare the MCP components.

```bash
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
```

2. Set up the MCP dependencies for the GPT Researcher project.

```bash
cd gptr-mcp
pip install -r requirements.txt
```

3. Create and configure environment variables. Copy the example environment file to a new .env file and populate your keys.

```bash
cp .env.example .env
```

Then add your keys, for example:

```
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
```

This server supports multiple transport modes and automatically adapts to your environment. For local development, you typically run the Python server directly and use STDIO transport. For production or web deployments, the Docker setup enables SSE transport and Docker networking for easy integration with other services.
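As a sanity check before starting the server, you can verify that the required keys are actually present in your .env file. This is a standard-library-only sketch (no python-dotenv); the key names follow the example above.

```python
import os

REQUIRED_KEYS = ("OPENAI_API_KEY", "TAVILY_API_KEY")

def load_env_file(path=".env"):
    """Parse simple KEY=value lines from a .env file into a dict.

    Blank lines, comments, and lines without '=' are skipped.
    """
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def check_required(env):
    """Return the names of any required keys that are missing or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Usage:
#   env = load_env_file(".env")
#   missing = check_required(env)
#   if missing:
#       raise SystemExit(f"Missing keys in .env: {', '.join(missing)}")
#   os.environ.update(env)
```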
When using Claude Desktop or similar clients, you configure a local MCP server entry that points to your Python server and passes the required API keys in the environment.
If you encounter issues, verify your API keys are set correctly, confirm you are using Python 3.11 or higher, and ensure dependencies are installed. Check server logs for error messages and confirm the server is reachable on the expected transport endpoint.
For Docker or container workflows, ensure the container is running, bound to 0.0.0.0:8000, and accessible from your host or network. Use the health endpoints and session-based MCP messaging to verify connectivity.
If you are integrating with n8n or other automation tools, verify networking between containers and use the container name as the hostname when forming the MCP URL.
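Forming that URL correctly is the usual stumbling block: from a sibling container, the hostname must be the server's container name, not localhost. A small sketch, assuming the container is named gptr-mcp and that the SSE transport is served at the conventional /sse path (verify the path against your server's startup logs):

```python
def mcp_sse_url(host, port=8000, path="/sse"):
    """Build the MCP SSE endpoint URL.

    From the host machine use 'localhost'; from another container on the
    same Docker network, use the server's container name as the host.
    """
    return f"http://{host}:{port}{path}"

# From the host machine:
local_url = mcp_sse_url("localhost")      # http://localhost:8000/sse
# From a sibling container such as n8n (container name is an assumption):
container_url = mcp_sse_url("gptr-mcp")   # http://gptr-mcp:8000/sse
```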
To connect Claude Desktop to your local MCP server, provide a configuration entry that starts the GPT Researcher MCP Server as a subprocess and passes API keys via the config. Ensure paths are absolute and the environment carries all required keys.
```json
{
  "mcpServers": {
    "gpt-researcher": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "OPENAI_API_KEY": "your-actual-openai-key-here",
        "TAVILY_API_KEY": "your-actual-tavily-key-here"
      }
    }
  }
}
```

Experiment with the available research tools: deep_research for in-depth sources, quick_search for fast results, and write_report to generate summaries. Use the session-based workflow to validate sources and gather context for your AI prompts.
The server supports multiple transport modes and uses a session-based approach over SSE. When running in Docker, it typically binds to 0.0.0.0:8000 to allow container-to-container communication.
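For container deployments, a docker-compose sketch consistent with that binding might look like the following; the service name, build path, and port mapping are assumptions, so adapt them to the repository's actual Dockerfile and compose file if one is provided.

```yaml
services:
  gptr-mcp:
    build: ./gptr-mcp        # assumes the Dockerfile lives in gptr-mcp/
    ports:
      - "8000:8000"          # server binds 0.0.0.0:8000 inside the container
    env_file:
      - .env                 # supplies OPENAI_API_KEY and TAVILY_API_KEY
```

Other containers on the same compose network can then reach the server using gptr-mcp as the hostname.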
The server exposes the following tools and prompts:
- deep_research performs deep web research on a topic, identifying the most reliable and up-to-date sources and extracting relevant information.
- quick_search executes a fast web search to retrieve concise results with snippets suitable for quick turnarounds.
- write_report generates a structured report based on the gathered research results.
- A sources tool retrieves the sources used during the research process for citation and review.
- A context tool provides the full context of the conducted research to support follow-up queries.
- A prompt tool creates a research query prompt used to drive the investigation.
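A typical session chains these tools: run deep_research first, then generate a report from the gathered context in the same session. The request payloads below sketch that flow; the argument names ("query", "topic") are assumptions, and the server's tools/list response gives the authoritative input schemas.

```python
import json

def tool_call(req_id, name, arguments):
    """Build a JSON-RPC 2.0 tools/call request for an MCP tool."""
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

# Step 1: perform deep research ("query" argument name is an assumption).
step1 = tool_call(10, "deep_research",
                  {"query": "solid-state battery progress"})

# Step 2: within the same session, write a report from the gathered context
# ("topic" argument name is likewise an assumption).
step2 = tool_call(11, "write_report",
                  {"topic": "solid-state battery progress"})

payloads = [json.dumps(p) for p in (step1, step2)]
```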