
Claude LMStudio MCP Server

Bridges Claude with local LM Studio models to list models, generate text, and perform chat completions.

Installation
Add the following to your MCP client configuration file.

Configuration

{
    "mcpServers": {
        "lmstudio_bridge": {
            "command": "/bin/bash",
            "args": [
                "/path/to/claude-lmstudio-bridge/run_server.sh"
            ],
            "env": {
                "LMSTUDIO_HOST": "127.0.0.1",
                "LMSTUDIO_PORT": "1234",
                "DEBUG": "false"
            }
        }
    }
}

You can bridge Claude with your local LM Studio models using this MCP server. It lets Claude discover locally running models, generate text, handle chat completions, and perform a health check against LM Studio, all through a standard MCP interface that you can control from Claude Desktop.

How to use

You will configure an MCP client in Claude Desktop to point to the local bridge server. Once set up, you can ask Claude to check connectivity, list available models, generate text with a local model, or send a chat completion request to your LM Studio instance. The bridge exposes these capabilities so you can operate your local LLMs directly from Claude (the API calls behind these operations are sketched after the list below).

  • Check the connection to LM Studio to confirm the bridge is reachable.
  • List the models available in your local LM Studio installation.
  • Generate text using a local model to verify generation works with your setup.
  • Send a chat completion query to a local model to verify conversational behavior.
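
For reference, here is a minimal Python sketch (not part of the bridge itself) of the LM Studio call that health_check and list_models roughly correspond to, assuming LM Studio's OpenAI-compatible API server is enabled on the default 127.0.0.1:1234:

# Sketch of the LM Studio call behind health_check and list_models
# (assumes the OpenAI-compatible API server is running on 127.0.0.1:1234).
import requests

BASE_URL = "http://127.0.0.1:1234/v1"

try:
    resp = requests.get(f"{BASE_URL}/models", timeout=5)
    resp.raise_for_status()
    model_ids = [m["id"] for m in resp.json().get("data", [])]
    print("LM Studio reachable, models:", model_ids)
except requests.RequestException as exc:
    print("LM Studio is not reachable:", exc)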

How to install

Follow these steps to set up the bridge and connect it to Claude Desktop. Setup uses the provided MCP configuration to run the local server process.

# macOS/Linux quick start
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge

chmod +x setup.sh
./setup.sh

# Follow the setup prompts to complete Claude Desktop configuration

REM Windows quick start
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge

setup.bat

REM Follow the setup prompts to complete Claude Desktop configuration

If you prefer manual setup, create and activate a virtual environment, install dependencies, and configure Claude Desktop to point at the bridge runner scripts as shown below.

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

pip install -r requirements.txt

# Configure Claude Desktop MCP server using one of the explicit commands below
# Manual MCP server configuration (macOS/Linux)
Name: lmstudio_bridge_linux
Command: /bin/bash
Arguments: /path/to/claude-lmstudio-bridge/run_server.sh

# Manual MCP server configuration (Windows)
Name: lmstudio_bridge_win
Command: cmd.exe
Arguments: /c C:\path\to\claude-lmstudio-bridge\run_server.bat

Additional content

The sections below cover configuration details, troubleshooting tips, and optional advanced settings for reliable operation.

Troubleshooting

Use the debugging tool to verify connectivity and perform detailed tests against LM Studio.

python debug_lmstudio.py
python debug_lmstudio.py --test-chat --verbose

Common fixes include verifying that LM Studio is running, that its API server is enabled, and that the port matches the value in your environment file. If no model is loaded, load one in LM Studio.
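
If debug_lmstudio.py reports a connection failure, a quick way to narrow it down is to check whether anything is listening on the configured port at all. The snippet below is a minimal sketch assuming the default host and port from the .env example:

# Quick TCP connectivity check, assuming the default host/port.
import socket

host, port = "127.0.0.1", 1234
try:
    with socket.create_connection((host, port), timeout=3):
        print(f"LM Studio API server is reachable at {host}:{port}")
except OSError as exc:
    print(f"Cannot reach {host}:{port} - is LM Studio's API server enabled? ({exc})")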

Advanced Configuration

You can customize the bridge's behavior by setting environment variables in a local .env file. This is useful for pointing the bridge at your LM Studio instance and enabling verbose output during troubleshooting; a sketch of how such variables are typically read follows the example values below.

LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
DEBUG=false
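
For reference, this is a sketch of how such variables are typically read in Python using python-dotenv; the bridge's own settings code may differ, and python-dotenv is assumed to be among the installed dependencies:

# Sketch of typical .env handling (the bridge's actual settings code may differ).
import os
from dotenv import load_dotenv

load_dotenv()  # read .env from the working directory, if present

LMSTUDIO_HOST = os.getenv("LMSTUDIO_HOST", "127.0.0.1")
LMSTUDIO_PORT = int(os.getenv("LMSTUDIO_PORT", "1234"))
DEBUG = os.getenv("DEBUG", "false").lower() == "true"

BASE_URL = f"http://{LMSTUDIO_HOST}:{LMSTUDIO_PORT}/v1"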

Available tools

health_check

Verify connectivity with LM Studio and confirm that the API server is responding.

list_models

List all available models currently loaded or discoverable in LM Studio.

generate_text

Generate text using a specified local model, returning the produced text.

chat_completion

Submit a chat-style prompt to a local model and retrieve a conversational completion.
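
To illustrate what generate_text and chat_completion do under the hood, the sketch below issues equivalent requests directly against LM Studio's OpenAI-compatible endpoints. The model id is a hypothetical placeholder, and the bridge's own parameter names and defaults may differ:

# Sketch of the requests that generate_text and chat_completion roughly map to.
import requests

BASE_URL = "http://127.0.0.1:1234/v1"
MODEL = "your-local-model-id"  # hypothetical placeholder; use a model loaded in LM Studio

# generate_text: plain text completion
completion = requests.post(
    f"{BASE_URL}/completions",
    json={"model": MODEL, "prompt": "Write a haiku about local LLMs.", "max_tokens": 64},
    timeout=60,
).json()
print(completion["choices"][0]["text"])

# chat_completion: conversational completion
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Summarize what this bridge does."}],
    },
    timeout=60,
).json()
print(chat["choices"][0]["message"]["content"])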