
MCP-Bridge MCP Server

Bridges MCP tools with the OpenAI API via MCP-Bridge, enabling tool calls and server-sent events (SSE) for external clients.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "secretiveshell-mcp-bridge": {
      "command": "uvx",
      "args": [
        "mcp-server-fetch"
      ],
      "env": {
        "MCP_BRIDGE__CONFIG__FILE": "config.json",
        "MCP_BRIDGE__CONFIG__JSON": "{\"inference_server\":{...}}",
        "MCP_BRIDGE_SECURITY_API_KEYS": "YOUR_API_KEY_HERE",
        "MCP_BRIDGE__CONFIG__HTTP_URL": "http://host:8888/config.json"
      }
    }
  }
}

MCP-Bridge acts as a bridge between the OpenAI API and MCP tools, allowing you to leverage MCP capabilities through the OpenAI interface. It enables tool-driven interactions and provides a standard way to test MCP tool availability and responses from your client apps.

How to use

You can use MCP-Bridge with an MCP client to access and orchestrate MCP tools through the familiar OpenAI API workflow. Start by configuring your MCP server as a local stdio backend or point your client to the available endpoints. When you send a request to the bridge, it will enumerate MCP tools, forward tool calls to the MCP server, and return the tool results to your client so the language model can generate a final response. Use the built-in SSE bridge for external applications that support server-sent events, and test your setup with an MCP-capable client to ensure the tool definitions and results flow correctly.
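As a sketch of that flow, a client request to the bridge can follow the standard OpenAI chat completions shape. The bridge URL, endpoint path, and model name below are illustrative assumptions, not values from the MCP-Bridge docs; the key point is that the client does not need to pass a `tools` array, since the bridge enumerates the MCP tools itself:

```python
import json

# Hypothetical bridge address; use wherever you actually deployed MCP-Bridge.
BRIDGE_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "my-model") -> str:
    """Build a standard OpenAI-style chat completions payload.

    The bridge injects the MCP tool definitions on its side, so the
    client payload carries only the model and messages.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request("Fetch the front page of example.com")
# POST `body` to BRIDGE_URL with Content-Type: application/json; the bridge
# forwards any tool calls to the configured MCP servers and returns the
# final completion to the client.
```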

How to install

Before installing MCP-Bridge, you need either Docker (recommended) or a Python setup with uv for non-Docker use. You also need an inference engine with tool-call support.

Additional configuration and usage notes

Configuration is done via a config.json file. You can enable API key authentication to secure your server; when authentication is enabled, include the API key in the Authorization header as a Bearer token. The bridge supports multiple MCP servers and can be started with a local config file or with an HTTP URL from which to fetch config.json. You can test the MCP tool list and tool calls through the provided REST endpoints and the SSE bridge at /mcp-server/sse.
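The three config sources correspond to the environment variables shown in the configuration block above. As an illustrative sketch only, selecting among them might look like this; the precedence order here is an assumption for the example, not MCP-Bridge's documented behavior:

```python
def pick_config_source(env: dict) -> tuple:
    """Return (kind, value) for the config source to use.

    Mirrors the three env vars from the configuration above. The
    precedence (file, then HTTP URL, then inline JSON) is an assumption
    made for this sketch.
    """
    if env.get("MCP_BRIDGE__CONFIG__FILE"):
        return ("file", env["MCP_BRIDGE__CONFIG__FILE"])
    if env.get("MCP_BRIDGE__CONFIG__HTTP_URL"):
        return ("http", env["MCP_BRIDGE__CONFIG__HTTP_URL"])
    if env.get("MCP_BRIDGE__CONFIG__JSON"):
        return ("inline", env["MCP_BRIDGE__CONFIG__JSON"])
    raise RuntimeError("no config source set")

kind, value = pick_config_source({"MCP_BRIDGE__CONFIG__FILE": "config.json"})
```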

Configuration details

To add new MCP servers, edit the config.json file. A typical setup includes an inference server section and an MCP server section for each tool. Example server definitions show how to run an MCP fetch tool locally using a streaming-capable bridge.
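A minimal config.json along those lines might look like the following. This is a sketch based on the general shape described above; exact key names can differ between MCP-Bridge versions, so check the upstream docs before relying on it:

```json
{
  "inference_server": {
    "base_url": "http://localhost:8000/v1",
    "api_key": "None"
  },
  "mcp_servers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Each entry under `mcp_servers` defines one tool backend; here the fetch tool is launched locally as a stdio subprocess.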

Security

Enable API key authentication to restrict access. Add an API key to the security section of your config.json. When enabled, requests must include a Bearer token in the Authorization header.
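The Bearer scheme itself is standard HTTP. As a minimal sketch (the server-side check below is illustrative; MCP-Bridge's actual validation may differ), the client and server halves look like this:

```python
def auth_header(api_key: str) -> dict:
    """Build the Authorization header to send when API key auth is enabled."""
    return {"Authorization": f"Bearer {api_key}"}

def is_authorized(headers: dict, expected_key: str) -> bool:
    """Illustrative server-side check for the standard "Bearer <key>" scheme."""
    return headers.get("Authorization", "") == f"Bearer {expected_key}"

headers = auth_header("YOUR_API_KEY_HERE")
# Attach `headers` to every request sent to the bridge.
```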

Examples and testing

Use a test client to list available MCP tools and to simulate tool calls. The SSE bridge lets you observe real-time tool listings and responses, which helps verify your configuration is correct before integrating with a client application.
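When inspecting the SSE stream by hand, it helps to know the wire format: `event:` and `data:` lines, with a blank line terminating each event. A minimal parser for that format, enough to examine what the /mcp-server/sse endpoint emits (the sample payload below is made up for illustration):

```python
def parse_sse_events(stream_text: str) -> list:
    """Parse SSE text into (event, data) pairs.

    Handles the basic format: optional "event:" line, one or more
    "data:" lines, and a blank line ending each event.
    """
    events, event, data = [], "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            events.append((event, "\n".join(data)))
            event, data = "message", []
    if data:  # stream ended without a trailing blank line
        events.append((event, "\n".join(data)))
    return events

sample = 'event: tools\ndata: {"tools": []}\n\n'
events = parse_sse_events(sample)
# events[0] == ("tools", '{"tools": []}')
```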

Troubleshooting

If you encounter authentication issues, verify the API key is present in the Authorization header and that the key in your config.json matches your client configuration. If tool calls fail, check that the MCP server command and arguments are correct and that the MCP server is reachable from the bridge host.

Available tools

MCP Tools

Exposes MCP tool primitives and endpoints so you can discover and invoke them through MCP-Bridge and the OpenAI API.

MCP Sampling

Supports sampling options to control model and tool usage behavior within MCP calls.

SSE Bridge

Provides a server-sent events endpoint for external clients to subscribe to MCP tool updates and results.