
MCP LLM Bridge

Bridges MCP tools to OpenAI-compatible LLMs, enabling tool-driven interactions for cloud or local models.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "bartolli-mcp-llm-bridge": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "test.db"
      ],
      "env": {
        "OPENAI_MODEL": "gpt-4o",
        "OPENAI_API_KEY": "sk-xxxxxxxx"
      }
    }
  }
}

You set up an MCP LLM Bridge to let OpenAI-compatible language models access and drive MCP-enabled tools through a standardized interface. This bridge translates MCP tool specifications into OpenAI function schemas and maps function invocations back to MCP tool executions, enabling cloud or local LLMs to leverage MCP capabilities with minimal friction.
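The translation step can be sketched in a few lines. The helper below is an illustrative assumption, not the bridge's actual internals; the MCP tool fields (name, description, inputSchema) follow the MCP tool specification, and inputSchema is already JSON Schema, which is exactly what OpenAI's function-calling API expects for parameters.

```python
# Illustrative sketch: converting an MCP-style tool description into an
# OpenAI function-calling "tools" entry. The helper is a simplified
# assumption, not the bridge's real code.

def mcp_tool_to_openai_function(tool: dict) -> dict:
    """Map one MCP tool definition to an OpenAI tools entry."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is JSON Schema, the format OpenAI
            # expects for function parameters.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }

# A tool shape like the sqlite MCP server might advertise:
mcp_tool = {
    "name": "read_query",
    "description": "Execute a SELECT query on the SQLite database",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

openai_tool = mcp_tool_to_openai_function(mcp_tool)
print(openai_tool["function"]["name"])  # read_query
```

When the model responds with a function call, the bridge performs the reverse mapping: it looks up the MCP tool by name and executes it with the call's arguments.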

How to use

You run the bridge and point your OpenAI-compatible model at it to start tool-enabled conversations. After starting, you can ask questions like “What are the most expensive products in the database?” and the model will invoke MCP tools as needed to fetch data or perform actions. You can stop the session by typing quit or pressing Ctrl+C. The bridge supports both remote OpenAI-style endpoints and local OpenAI API-compatible endpoints, so you can work with cloud models or local runtimes.

How to install

Prerequisites: Python with the ability to install packages, plus curl and git available on your system.

# Install
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .

# Create test database
python -m mcp_llm_bridge.create_test_db
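The create_test_db module seeds a sample database for the demo. Its actual schema isn't shown here, but a minimal stand-in with a products table (matching the example question about expensive products) might look like this; the table name and columns are assumptions:

```python
import sqlite3

# Hypothetical stand-in for mcp_llm_bridge.create_test_db: a small
# products table so queries like "most expensive products" return data.
# The real module's schema may differ.
conn = sqlite3.connect("test.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS products ("
    "  id INTEGER PRIMARY KEY,"
    "  name TEXT NOT NULL,"
    "  price REAL NOT NULL)"
)
conn.executemany(
    "INSERT INTO products (name, price) VALUES (?, ?)",
    [("Laptop", 1299.99), ("Keyboard", 79.50), ("Monitor", 349.00)],
)
conn.commit()

# The kind of query the model would issue through the sqlite MCP server:
rows = conn.execute(
    "SELECT name, price FROM products ORDER BY price DESC LIMIT 1"
).fetchall()
print(rows)
conn.close()
```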

Configuration and usage details

To use OpenAI as the primary LLM provider, create an environment file and start the bridge with a short Python snippet. Any additional endpoint that implements the OpenAI API, such as a local OpenAI-compatible runtime, works the same way.

# Create .env
OPENAI_API_KEY=your_key
OPENAI_MODEL=gpt-4o # or any other OpenAI model that supports tools
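The bridge reads these values through the environment. A minimal sketch of loading such a .env file without extra dependencies follows; libraries like python-dotenv do this more robustly, and this parser is an illustration rather than what the bridge ships:

```python
import os

def load_env(path: str = ".env") -> None:
    """Naively parse KEY=VALUE lines and export them.

    Skips comments; real process environment variables take precedence
    because setdefault never overwrites an existing value.
    """
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop inline comments
            if "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Demo: write a sample .env, clear the variable, load, and read it back.
with open(".env", "w") as fh:
    fh.write("OPENAI_MODEL=gpt-4o # or any tool-capable model\n")
os.environ.pop("OPENAI_MODEL", None)  # clear for a deterministic demo
load_env()
print(os.environ["OPENAI_MODEL"])  # gpt-4o
```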

To run the bridge against the OpenAI endpoint, configure it in Python as follows. This example uses a local MCP SQLite server started via uvx and the test database created above.

import os

from mcp import StdioServerParameters
from mcp_llm_bridge.config import BridgeConfig, LLMConfig

config = BridgeConfig(
    mcp_server_params=StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "test.db"],
        env=None
    ),
    llm_config=LLMConfig(
        api_key=os.getenv("OPENAI_API_KEY"),
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        base_url=None
    )
)

Commands to run

Start the bridge after configuring the environment. Then interact with it using your preferred OpenAI-compatible model.

python -m mcp_llm_bridge.main

# Try: "What are the most expensive products in the database?"
# Exit with 'quit' or Ctrl+C

Running tests

Install the test dependencies and run the test suite to verify the setup.

uv pip install -e ".[test]" 

python -m pytest -v tests/

Notes on extended endpoints

The bridge also supports endpoints that implement the OpenAI API specification. You can configure models and endpoints that run locally, such as Ollama or LM Studio, by adjusting the LLM settings to point at the local base URL.

# Ollama example
llm_config=LLMConfig(
    api_key="not-needed",
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://localhost:11434/v1"
)

# LM Studio example
llm_config=LLMConfig(
    api_key="not-needed",
    model="local-model",
    base_url="http://localhost:1234/v1"
)

Security and maintenance

Keep your OpenAI API key secure, rotate credentials as needed, and manage access to the local MCP components to prevent unauthorized tool executions.