The Ultimate MCP Server provides a comprehensive framework for AI agents to access dozens of powerful capabilities through the Model Context Protocol (MCP). This server acts as a complete AI agent operating system, enabling advanced AI agents to leverage tools for cognitive augmentation, tool use, and intelligent orchestration.
To install the Ultimate MCP Server:

```bash
# Install uv (fast Python package manager) if you don't have it:
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/Dicklesworthstone/ultimate_mcp_server.git
cd ultimate_mcp_server

# Create a virtual environment and install dependencies using uv:
uv venv --python 3.13
source .venv/bin/activate
uv lock --upgrade
uv sync --all-extras
```
Create a `.env` file in the root directory with your API keys and configuration:

```bash
# --- API Keys (at least one provider required) ---
OPENAI_API_KEY=your_openai_sk-...
ANTHROPIC_API_KEY=your_anthropic_sk-...
GEMINI_API_KEY=your_google_ai_studio_key...
# DEEPSEEK_API_KEY=your_deepseek_key...
# OPENROUTER_API_KEY=your_openrouter_key...
# GROK_API_KEY=your_grok_key...

# --- Server Configuration (Defaults shown) ---
GATEWAY_SERVER_PORT=8013
GATEWAY_SERVER_HOST=127.0.0.1

# --- Logging Configuration (Defaults shown) ---
LOG_LEVEL=INFO
USE_RICH_LOGGING=true

# --- Cache Configuration (Defaults shown) ---
GATEWAY_CACHE_ENABLED=true
GATEWAY_CACHE_TTL=86400
```
Start the MCP server with one of the following commands:

```bash
# Start the server with all registered tools
umcp run

# Start with specific tools only
umcp run --include-tools completion chunk_document read_file write_file

# Start excluding specific tools
umcp run --exclude-tools browser_init browser_navigate

# Start with Docker (ensure .env file exists)
docker compose up --build
```
The server can be accessed via any MCP-compatible client. Here are some examples of how to use its capabilities:
```python
import asyncio
from mcp.client import Client

async def basic_completion_example():
    client = Client("http://localhost:8013")
    response = await client.tools.completion(
        prompt="Write a short poem about a robot learning to dream.",
        provider="openai",
        model="gpt-4.1-mini",
        max_tokens=100,
        temperature=0.7
    )
    print(f"Completion: {response['completion']}")
    await client.close()

asyncio.run(basic_completion_example())
```
The server enables AI agents to process large documents efficiently by delegating chunking and summarization to appropriate models:
```python
import asyncio
from mcp.client import Client

async def document_analysis_example():
    client = Client("http://localhost:8013")
    document = "... large document content ..."

    # First, chunk the document
    chunks_response = await client.tools.chunk_document(
        document=document,
        chunk_size=1000,
        overlap=100,
        method="semantic"
    )

    # Then summarize each chunk using a cost-effective model
    summaries = []
    for chunk in chunks_response["chunks"]:
        summary_response = await client.tools.summarize_document(
            document=chunk,
            provider="gemini",
            model="gemini-2.0-flash-lite",
            format="paragraph",
            max_length=150
        )
        summaries.append(summary_response["summary"])

    # Final synthesis can be performed by the agent
    await client.close()

asyncio.run(document_analysis_example())
```
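The final synthesis step left to the agent can be as simple as assembling the per-chunk summaries into a single prompt for a more capable model. The helper below is a minimal sketch; its name and prompt wording are illustrative and not part of the server's API:

```python
def build_synthesis_prompt(summaries):
    """Combine per-chunk summaries into one synthesis prompt.

    The prompt wording here is illustrative; adapt it to your task.
    """
    joined = "\n\n".join(
        f"Chunk {i + 1} summary:\n{summary}"
        for i, summary in enumerate(summaries)
    )
    return (
        "Synthesize the following chunk summaries into a single "
        "coherent overview of the document:\n\n" + joined
    )
```

The resulting prompt can then be sent through the same `client.tools.completion` call shown earlier, typically to a more capable model than the one used for per-chunk summarization.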
The server provides tools for web browsing and research:
```python
import asyncio
from mcp.client import Client

async def browser_research_example():
    client = Client("http://localhost:8013")

    # Initialize browser
    init_response = await client.tools.browser_init(headless=True)

    # Navigate to a site
    nav_response = await client.tools.browser_navigate(
        url="https://example.com/",
        wait_until="domcontentloaded"
    )

    # Extract text content
    text_response = await client.tools.browser_get_text(selector="h1")
    print(f"Extracted text: '{text_response['text']}'")

    # Close browser
    await client.tools.browser_close()
    await client.close()

asyncio.run(browser_research_example())
```
The Ultimate MCP Server includes a powerful CLI:
```bash
# Display version information
umcp --version

# Start the server
umcp run

# List available providers
umcp providers

# Check API keys for all configured providers
umcp providers -c

# Test a specific provider
umcp test anthropic --model claude-3-5-haiku-20241022 --prompt "Write a short poem about coding."

# Generate text directly from the CLI
umcp complete --provider openai --prompt "What are quantum computers?"

# View cache status
umcp cache

# Clear the cache
umcp cache --clear

# Run provider benchmarks
umcp benchmark --providers openai,anthropic

# List available tools
umcp tools

# List tools in a specific category
umcp tools --category document
```
One of the key benefits of the Ultimate MCP Server is significant cost reduction through intelligent delegation: routing simple tasks to inexpensive models and reserving high-end models for complex work can yield 70-90% cost savings compared to using a high-end model for every task.
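One way to realize these savings is to route each request to a model tier based on task complexity. The sketch below is purely illustrative: the tier names, keyword heuristic, and per-token prices are assumptions for the example, not something the server ships, and provider pricing changes over time:

```python
# Illustrative per-1M-input-token prices (assumed, not current pricing).
MODEL_TIERS = {
    "cheap":   {"model": "gemini-2.0-flash-lite", "input_cost_per_mtok": 0.075},
    "mid":     {"model": "gpt-4.1-mini",          "input_cost_per_mtok": 0.40},
    "premium": {"model": "claude-3-5-sonnet",     "input_cost_per_mtok": 3.00},
}

def pick_tier(task: str) -> str:
    """Route a task description to a model tier (toy keyword heuristic)."""
    task = task.lower()
    if any(word in task for word in ("summarize", "extract", "classify")):
        return "cheap"
    if any(word in task for word in ("draft", "rewrite")):
        return "mid"
    return "premium"

def estimated_savings(task: str, input_tokens: int) -> float:
    """Fraction saved versus always sending the task to the premium tier."""
    tier = MODEL_TIERS[pick_tier(task)]
    baseline = MODEL_TIERS["premium"]["input_cost_per_mtok"] * input_tokens / 1_000_000
    actual = tier["input_cost_per_mtok"] * input_tokens / 1_000_000
    return 1 - actual / baseline
```

With these assumed prices, delegating a summarization task to the cheap tier saves over 97% of the cost relative to the premium tier, which is where savings figures in the 70-90%+ range come from when workloads mix simple and complex tasks.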
For more detailed information about specific tools and capabilities, refer to the source code and documentation within the project.
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the `~/.cursor/mcp.json` file so that it is available in all of your projects.

If you only need the server in a single project, you can instead add it to the project by creating (or editing) the `.cursor/mcp.json` file there.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server". Clicking that button opens the `~/.cursor/mcp.json` file, where you can add your server like this:
```json
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
```
To add an MCP server to a project, create a new `.cursor/mcp.json` file or add the entry to the existing one. The format is exactly the same as the global MCP server example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then be able to see the available tools the added MCP server has available and will call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning the tool's name and describing what it does.