Bridges MCP tools to OpenAI-compatible LLMs, enabling tool-driven interactions for cloud or local models.
Configuration
```json
{
  "mcpServers": {
    "bartolli-mcp-llm-bridge": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "test.db"
      ],
      "env": {
        "OPENAI_MODEL": "gpt-4o",
        "OPENAI_API_KEY": "sk-xxxxxxxx"
      }
    }
  }
}
```
You set up an MCP LLM Bridge to let OpenAI-compatible language models access and drive MCP-enabled tools through a standardized interface. The bridge translates MCP tool specifications into OpenAI function schemas and maps function invocations back to MCP tool executions, so cloud or local LLMs can leverage MCP capabilities with minimal friction.
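The forward half of that translation can be sketched as a small function: an MCP tool definition (name, description, and a JSON Schema under `inputSchema`) becomes an OpenAI function-calling schema. The helper below is illustrative only, not part of the bridge's public API; the field names follow the MCP and OpenAI conventions.

```python
def mcp_tool_to_openai_function(tool: dict) -> dict:
    """Map an MCP tool definition to an OpenAI function-calling schema.

    Illustrative sketch: the bridge performs this translation internally;
    this helper only shows the shape of the mapping.
    """
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP exposes a JSON Schema under `inputSchema`; OpenAI expects
            # the same schema under `parameters`.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }


# Example: an MCP SQLite query tool, as mcp-server-sqlite might describe it
mcp_tool = {
    "name": "read_query",
    "description": "Execute a SELECT query on the SQLite database",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
openai_schema = mcp_tool_to_openai_function(mcp_tool)
```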
You run the bridge and point your OpenAI-compatible model at it to start tool-enabled conversations. After starting, you can ask questions like “What are the most expensive products in the database?” and the model will invoke MCP tools as needed to fetch data or perform actions. You can stop the session by typing quit or pressing Ctrl+C. The bridge supports both remote OpenAI-style endpoints and local OpenAI API-compatible endpoints, so you can work with cloud models or local runtimes.
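The reverse direction, mapping a model's function call back to an MCP tool execution, is equally mechanical. A sketch, assuming the standard OpenAI tool-call payload shape (the helper itself is hypothetical, not the bridge's API):

```python
import json


def openai_call_to_mcp_invocation(tool_call: dict) -> tuple[str, dict]:
    """Translate an OpenAI tool call into an MCP tool name plus arguments.

    Sketch only: the bridge would pass these on to the MCP session's
    tool-execution call; error handling is omitted.
    """
    fn = tool_call["function"]
    # OpenAI delivers function arguments as a JSON-encoded string.
    return fn["name"], json.loads(fn["arguments"])


name, args = openai_call_to_mcp_invocation({
    "id": "call_1",
    "type": "function",
    "function": {
        "name": "read_query",
        "arguments": "{\"query\": \"SELECT * FROM products\"}",
    },
})
```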
Prerequisites: you need Python and the ability to install Python packages, and you should have curl and git available on your system.
```bash
# Install
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .

# Create test database
python -m mcp_llm_bridge.create_test_db
```
To configure OpenAI as the primary LLM provider, create an environment file and then start the bridge with a short Python snippet. You can also use any other endpoint that implements the OpenAI API, such as a local OpenAI-compatible runtime.
```bash
# Create .env
OPENAI_API_KEY=your_key
OPENAI_MODEL=gpt-4o  # or any other OpenAI model that supports tools
```
To run the bridge against the OpenAI endpoint, configure the runtime in Python as follows. This example uses a local MCP server started via uvx and the test SQLite database.
```python
import os

from mcp import StdioServerParameters
from mcp_llm_bridge.config import BridgeConfig, LLMConfig

config = BridgeConfig(
    # MCP server to bridge: the SQLite server launched via uvx
    mcp_server_params=StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "test.db"],
        env=None
    ),
    # LLM endpoint; base_url=None means the default OpenAI API
    llm_config=LLMConfig(
        api_key=os.getenv("OPENAI_API_KEY"),
        model=os.getenv("OPENAI_MODEL", "gpt-4o"),
        base_url=None
    )
)
```
Start the bridge after configuring the environment, then interact with it using your preferred OpenAI-compatible model.
```bash
python -m mcp_llm_bridge.main

# Try: "What are the most expensive products in the database?"
# Exit with 'quit' or Ctrl+C
```
Install the test dependencies and run the test suite to verify the setup.
```bash
uv pip install -e ".[test]"
python -m pytest -v tests/
```
The bridge also supports any endpoint that implements the OpenAI API specification. You can point it at locally running models, such as Ollama or LM Studio, by setting the LLM configuration's base URL to the local endpoint.
```python
# Ollama example
llm_config = LLMConfig(
    api_key="not-needed",
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://localhost:11434/v1"
)

# LM Studio example
llm_config = LLMConfig(
    api_key="not-needed",
    model="local-model",
    base_url="http://localhost:1234/v1"
)
```
Keep your OpenAI API key secure, rotate credentials as needed, and restrict access to the local MCP components to prevent unauthorized tool executions.