LiteLLM MCP server

Integrates with LiteLLM to provide access to OpenAI language models for text completion and generation tasks.
Provider: Darian Ngo
Release date: Dec 09, 2024
Language: Python
Stats: 5 stars

This MCP server integrates LiteLLM to handle text completions using OpenAI models, providing a standardized way to interact with Large Language Models through the Model Context Protocol.

Installation

You can install the MCP server with a simple pip command:

pip install mcp-server-litellm
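
To confirm the package installed correctly, you can inspect its metadata with pip:

pip show mcp-server-litellm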

Usage

Starting the Server

To start the MCP server, use the following command:

mcp-litellm --host 127.0.0.1 --port 8080

This will launch the server on localhost port 8080. You can modify these parameters as needed for your specific setup.

Configuration

The server can be configured using environment variables or command-line options:

# Set OpenAI API key
export OPENAI_API_KEY=your_api_key_here

# Then start the server
mcp-litellm --model gpt-3.5-turbo
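
If you prefer not to export the key into your shell session, standard shell syntax also lets you set it for a single invocation:

OPENAI_API_KEY=your_api_key_here mcp-litellm --model gpt-3.5-turbo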

Command-Line Options

The server supports several command-line options:

  • --host: Host address to bind to (default: 127.0.0.1)
  • --port: Port to listen on (default: 8080)
  • --model: Default model to use (default: gpt-3.5-turbo)
  • --log-level: Set logging level (default: info)

Making Requests

Once the server is running, you can make requests to it using any HTTP client:

import requests

# Endpoint exposed by the locally running server
url = "http://127.0.0.1:8080/v1/completions"
headers = {"Content-Type": "application/json"}
data = {
    "prompt": "Hello, world!",
    "max_tokens": 100
}

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # fail fast on HTTP errors
result = response.json()
print(result["choices"][0]["text"])
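
The same request can be issued from the command line with curl, targeting the same /v1/completions endpoint used in the Python example above:

curl -X POST http://127.0.0.1:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, world!", "max_tokens": 100}'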

Advanced Configuration

For production deployments, you may want to set additional parameters. Keep in mind that binding to 0.0.0.0 exposes the server on all network interfaces, so only do so behind appropriate access controls:

mcp-litellm --host 0.0.0.0 --port 8080 --model gpt-4 --log-level debug

Troubleshooting

Common Issues

  • API Key Issues: Ensure your OpenAI API key is correctly set (see the check below)
  • Connection Errors: Verify the server is running and accessible
  • Model Availability: Confirm you have access to the requested model
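
For the first two items, a short script like the following can serve as a quick sanity check. This is a generic sketch, not part of the package; adjust the host and port to match your setup:

import os
import requests

# Verify the OpenAI API key is visible in the environment
if not os.environ.get("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is not set")

# Verify the server is reachable at its default address
try:
    requests.get("http://127.0.0.1:8080", timeout=5)
    print("Server is reachable")
except requests.exceptions.RequestException as exc:
    print(f"Could not reach the server: {exc}")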

Logs

To diagnose problems, check the server logs by running with increased verbosity:

mcp-litellm --log-level debug
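
Assuming the server writes its logs to stdout/stderr, standard shell redirection can capture them to a file for later inspection:

mcp-litellm --log-level debug 2>&1 | tee mcp-litellm.log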

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button, the ~/.cursor/mcp.json file will open and you can add your server like this (a generic example using the cursor-rules-mcp package):

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}
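
For this particular server, a plausible entry would launch the mcp-litellm command described above. Treat this as a sketch only: the "litellm" key is an arbitrary label, and the exact command and arguments may vary with your version of the server and the transport it speaks:

{
    "mcpServers": {
        "litellm": {
            "command": "mcp-litellm",
            "env": {
                "OPENAI_API_KEY": "your_api_key_here"
            }
        }
    }
}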

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the added MCP server exposes and will call them when it needs to.

You can also explicitly ask the agent to use a tool by mentioning it by name and describing what it does.
