OpenAI WebSearch MCP server

Enables AI assistants to search the web in real-time through OpenAI's websearch functionality, retrieving up-to-date information beyond training data cutoffs with configurable search parameters.
Provider
Conecho
Release date
Mar 12, 2025
Language
Python
Stats
9.9K downloads
68 stars

The OpenAI WebSearch MCP Server provides intelligent web search through OpenAI's reasoning models, enabling AI assistants to access up-to-date information beyond their training cutoffs.

Quick Installation

One-Click Installation for Claude Desktop

OPENAI_API_KEY=sk-xxxx uvx --with openai-websearch-mcp openai-websearch-mcp-install

Replace sk-xxxx with your OpenAI API key from the OpenAI Platform.

Using uvx (Recommended)

# Install and run directly
uvx openai-websearch-mcp

# Or install globally as a tool
uv tool install openai-websearch-mcp

Using pip

# Install from PyPI
pip install openai-websearch-mcp

# Run the server
python -m openai_websearch_mcp

Configuration

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}

Cursor

Add to your MCP settings in Cursor:

  1. Open Cursor Settings (Cmd/Ctrl + ,)
  2. Search for "MCP" or go to Extensions → MCP
  3. Add server configuration:
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}

Claude Code

Claude Code automatically detects MCP servers configured for Claude Desktop. Use the same configuration as above for Claude Desktop.

Local Development

For local testing, use the absolute path to your virtual environment:

{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "/path/to/your/project/.venv/bin/python",
      "args": ["-m", "openai_websearch_mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini",
        "PYTHONPATH": "/path/to/your/project/src"
      }
    }
  }
}

Available Tools

openai_web_search

Intelligent web search with reasoning model support.

Parameters

| Parameter | Type | Description | Default |
|---|---|---|---|
| input | string | The search query or question to search for | Required |
| model | string | AI model to use: gpt-4o, gpt-4o-mini, gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini | gpt-5-mini |
| reasoning_effort | string | Reasoning effort level: minimal, low, medium, or high (applies to reasoning models only) | Model-specific (see Model Comparison) |
| type | string | Web search API version | web_search_preview |
| search_context_size | string | Amount of search context: low, medium, or high | medium |
| user_location | object | Optional location for localized results | null |
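As an illustration, a call that sets every optional parameter might look like the following. The user_location field names shown here are assumptions based on OpenAI's approximate-location format, not taken from this server's source:

```json
{
  "input": "Latest AI reasoning model announcements this week",
  "model": "gpt-5",
  "reasoning_effort": "high",
  "type": "web_search_preview",
  "search_context_size": "high",
  "user_location": {
    "type": "approximate",
    "country": "US",
    "city": "San Francisco",
    "region": "California"
  }
}
```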

Usage Examples

Once configured, simply ask your AI assistant to search for information using natural language:

Quick Search

"Search for the latest developments in AI reasoning models using openai_web_search"

Deep Research

"Use openai_web_search with gpt-5 and high reasoning effort to provide a comprehensive analysis of quantum computing breakthroughs"

Localized Search

"Search for local tech meetups in San Francisco this week using openai_web_search"

Model Selection Guide

Quick Multi-Round Searches

  • Recommended: gpt-5-mini with reasoning_effort: "low"
  • Use Case: Fast iterations, real-time information, multiple quick queries
  • Benefits: Lower latency, cost-effective for frequent searches

Deep Research

  • Recommended: gpt-5 with reasoning_effort: "medium" or "high"
  • Use Case: Comprehensive analysis, complex topics, detailed investigation
  • Benefits: Multi-round reasoned results, no need for agent iterations

Model Comparison

| Model | Default Reasoning Effort | Best For |
|---|---|---|
| gpt-4o | N/A (non-reasoning) | Standard search |
| gpt-4o-mini | N/A (non-reasoning) | Basic queries |
| gpt-5-mini | low | Fast iterations |
| gpt-5 | medium | Deep research |
| gpt-5-nano | medium | Balanced approach |
| o3 | medium | Advanced reasoning |
| o4-mini | medium | Efficient reasoning |
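The table above can be expressed as a small lookup. This is a sketch based solely on the documented defaults, not the server's source; the names here are illustrative:

```python
# Default reasoning effort per model, mirroring the comparison table.
# None marks non-reasoning models (gpt-4o family).
DEFAULT_EFFORT = {
    "gpt-4o": None,
    "gpt-4o-mini": None,
    "gpt-5-mini": "low",
    "gpt-5": "medium",
    "gpt-5-nano": "medium",
    "o3": "medium",
    "o4-mini": "medium",
}

def default_effort(model: str):
    """Return the default reasoning effort, or None for non-reasoning models."""
    return DEFAULT_EFFORT.get(model)
```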

Environment Variables

| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | Your OpenAI API key | Required |
| OPENAI_DEFAULT_MODEL | Default model to use | gpt-5-mini |
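A minimal sketch of how a server like this might resolve these variables, assuming the documented behavior (required key, model fallback); the function name is illustrative:

```python
import os

def resolve_settings(env=os.environ):
    """Read the documented environment variables, applying defaults."""
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is required")
    model = env.get("OPENAI_DEFAULT_MODEL", "gpt-5-mini")
    return {"api_key": api_key, "model": model}
```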

Debugging Tips

Using MCP Inspector

# For uvx installations
npx @modelcontextprotocol/inspector uvx openai-websearch-mcp

# For pip installations
npx @modelcontextprotocol/inspector python -m openai_websearch_mcp

Common Issues

Issue: "Unsupported parameter: 'reasoning.effort'"
Solution: This occurs when using non-reasoning models (gpt-4o, gpt-4o-mini) with reasoning_effort parameter. The server automatically handles this by only applying reasoning parameters to compatible models.
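The workaround described above can be sketched as follows. This is an assumed implementation, not the server's actual code, and the names are illustrative:

```python
# Models that reject the reasoning parameter entirely.
NON_REASONING_MODELS = {"gpt-4o", "gpt-4o-mini"}

def build_request(input_text, model="gpt-5-mini", reasoning_effort="low"):
    """Build request kwargs, attaching reasoning only for compatible models."""
    request = {"model": model, "input": input_text}
    if model not in NON_REASONING_MODELS:
        request["reasoning"] = {"effort": reasoning_effort}
    return request
```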

Issue: "No module named 'openai_websearch_mcp'"
Solution: Ensure you've installed the package correctly and your Python path includes the package location.

How to install this MCP server

For Claude Code

To add this MCP server to Claude Code, run this command in your terminal:

claude mcp add-json "openai-websearch-mcp" '{"command":"uvx","args":["openai-websearch-mcp"],"env":{"OPENAI_API_KEY":"your-api-key-here"}}'

See the official Claude Code MCP documentation for more details.

For Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > Tools & Integrations and click "New MCP Server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "openai-websearch-mcp": {
            "command": "uvx",
            "args": [
                "openai-websearch-mcp"
            ],
            "env": {
                "OPENAI_API_KEY": "your-api-key-here"
            }
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then see the tools the added MCP server exposes and call them when needed.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.

For Claude Desktop

To add this MCP server to Claude Desktop:

1. Find your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

2. Add this to your configuration file:

{
    "mcpServers": {
        "openai-websearch-mcp": {
            "command": "uvx",
            "args": [
                "openai-websearch-mcp"
            ],
            "env": {
                "OPENAI_API_KEY": "your-api-key-here"
            }
        }
    }
}

3. Restart Claude Desktop for the changes to take effect
