
Ultimate MCP Server

Comprehensive MCP server exposing dozens of capabilities to AI agents: multi-provider LLM delegation, browser automation, document processing, vector ops, and cognitive memory systems

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "dicklesworthstone-ultimate_mcp_server": {
      "url": "https://example-mcp.example.com/mcp",
      "headers": {
        "GROK_API_KEY": "key-...",
        "GEMINI_API_KEY": "AIza...",
        "OPENAI_API_KEY": "sk-...",
        "DEEPSEEK_API_KEY": "key-...",
        "ANTHROPIC_API_KEY": "sk-...",
        "OPENROUTER_API_KEY": "key-...",
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/key.json"
      }
    }
  }
}

The Ultimate MCP Server is a comprehensive AI agent operating system: it exposes dozens of capabilities through the Model Context Protocol, giving autonomous agents unified access to tools, memory, browsing, data processing, and cross-provider model orchestration from a single server. Agents can carry out complex, multi-step workflows across the web, documents, databases, and filesystems while the server optimizes for cost, performance, and quality.

How to use

You interact with the Ultimate MCP Server by running it locally or in a container and connecting an MCP client or agent to its endpoint. The server exposes a suite of tools and services that an agent can call through the MCP protocol. You can route tasks to different providers, cache results, process documents, browse the web, manipulate spreadsheets, manage memories, and orchestrate multi-step workflows. Use the server to empower AI agents to complete complex projects autonomously while leveraging cost-aware routing and memory persistence.

How to install

Prerequisites: Python 3.13 and the uv package manager. You will clone the repository, create a virtual environment, install dependencies, and add optional extras as needed.

# Install uv (fast Python package manager) if you don't have it:
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/Dicklesworthstone/ultimate_mcp_server.git
cd ultimate_mcp_server

# Create a virtual environment and install dependencies using uv:
uv venv --python 3.13
source .venv/bin/activate
uv lock --upgrade
uv sync --all-extras

Run the server

Activate your virtual environment and start the server with the desired tools. You can start with all registered tools or limit to a subset. You can also run with Docker if you prefer containerization.

# Start the MCP server with all registered tools found
umcp run

# Start the server including only specific tools
umcp run --include-tools completion chunk_document read_file write_file

# Start the server excluding specific tools
umcp run --exclude-tools browser_init browser_navigate research_and_synthesize_report

# Start with Docker (ensure .env file exists in the project root or pass environment variables)
docker compose up --build

Configuration basics

Configure the server with environment variables, stored in a .env file or supplied at runtime. You can set provider API keys, the server host and port, logging preferences, caching options, provider timeouts, and tool-specific settings. The server exposes a health endpoint and a management CLI. Keep the server bound to a restricted host by default, and place it behind a reverse proxy in production.
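A minimal .env sketch is shown below. The provider key names match the client configuration above; the server-setting names are illustrative, so check the repository's example environment file for the authoritative list.

```
# Provider credentials (same keys as the MCP client config above)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
GEMINI_API_KEY=AIza...

# Server settings (names illustrative; see the repository's example env file)
SERVER_HOST=127.0.0.1
SERVER_PORT=8013
LOG_LEVEL=INFO
```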

Notes on security and best practices

Secure your setup by keeping API keys out of code, using a reverse proxy with TLS, restricting access to trusted networks, and enabling rate limiting. Validate all inputs to tools, especially filesystem, SQL, and browser actions. Regularly update dependencies and monitor logs. Use memory and cache wisely to balance cost and performance.
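Input validation for the filesystem tools can be sketched as a sandbox containment check: resolve the user-supplied path and refuse anything that escapes the allowed root. This is a minimal illustration, not the server's actual implementation.

```python
from pathlib import Path

def resolve_inside(root: str, user_path: str) -> Path:
    """Resolve a user-supplied path and ensure it stays inside `root`.

    A minimal guard for tools like read_file/write_file: absolute paths
    and `..` escapes are rejected before anything touches the disk.
    """
    base = Path(root).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate
```

Note that joining an absolute `user_path` with `/` replaces the base entirely, so the containment check catches that case too.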

Examples and usage patterns

You can leverage a wide range of tools in practical workflows: browse and gather data, chunk and summarize documents, extract structured data, run SQL queries, manipulate Excel workbooks, transcribe audio, perform OCR, and orchestrate multi-step pipelines. The server is designed to enable cost-aware delegation across providers and to persist agent memory across tasks.
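As a sketch of the cost-aware delegation idea, the toy router below picks the cheapest provider that meets a quality floor. The prices and quality tiers are made-up placeholders, not the server's actual routing tables.

```python
# Placeholder pricing/quality data (illustrative only).
PROVIDERS = [
    {"name": "deepseek",  "usd_per_mtok": 0.27, "quality": 2},
    {"name": "gemini",    "usd_per_mtok": 0.30, "quality": 3},
    {"name": "openai",    "usd_per_mtok": 2.50, "quality": 4},
    {"name": "anthropic", "usd_per_mtok": 3.00, "quality": 5},
]

def route(min_quality: int) -> str:
    """Return the cheapest provider meeting the quality floor."""
    eligible = [p for p in PROVIDERS if p["quality"] >= min_quality]
    if not eligible:
        raise ValueError("no provider meets the requested quality")
    return min(eligible, key=lambda p: p["usd_per_mtok"])["name"]
```

A simple summarization pass might route to a cheap tier, while a final synthesis step requests the highest quality available.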

Autonomous Documentation Refiner

The server includes an autonomous refiner that analyzes, tests, and refines tool documentation to reduce ambiguities for agents. It can generate tests, validate schemas, simulate usage, and propose targeted improvements to descriptions, schemas, and examples.

Getting help and troubleshooting

If you encounter issues, check logging output, ensure environment variables are loaded, verify that the server is reachable at the configured host and port, and confirm that the required tools are registered. For persistent problems, review the health endpoint and consult the available examples to confirm correct tool usage.
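The reachability check above usually boils down to polling with backoff. A small sketch, where `probe` is a stand-in for whatever check you use (for example, an HTTP GET of the health endpoint that returns True on HTTP 200):

```python
import time

def wait_until_healthy(probe, attempts: int = 5, delay: float = 0.5) -> bool:
    """Poll a health probe with linear backoff until it succeeds or gives up.

    `probe` is any zero-argument callable returning True when the server is up.
    """
    for i in range(attempts):
        if probe():
            return True
        time.sleep(delay * (i + 1))
    return False
```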

Additional considerations

For production deployments, consider running the server as a background service, placing a reverse proxy in front, and using orchestration for scalability. Monitor resource usage (RAM, CPU, I/O) and adjust worker counts and caching configurations accordingly.

Available tools

completion

Direct text generation via a provider/model with customizable temperature and token limits

chunk_document

Split a large document into manageable chunks with overlap for processing and summarization
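The character-window chunking with overlap described above can be sketched as follows; this is illustrative only, as the actual tool may also chunk by tokens or semantic boundaries.

```python
def chunk_text(text: str, size: int, overlap: int) -> list[str]:
    """Split `text` into windows of `size` characters, each sharing
    `overlap` characters with its predecessor."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Stop once the remaining tail is already covered by the previous window.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```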

read_file

Read a file from the server filesystem for processing by MCP tools

write_file

Write content to a file on the server filesystem as part of a workflow

browser_init

Initialize a browser context for Playwright-based automation

browser_navigate

Navigate to a URL within the browser context and wait for a condition

research_and_synthesize_report

Orchestrate browser searches and synthesis of a research report from multiple sources

summarize_document

Summarize text or document chunks using a selected provider/model

extract_entities

Extract named entities from text using a model and optional provider choice

extract_json

Extract structured JSON data from text following a provided schema

excel_execute

Create or modify Excel workbooks with instructions and formatting options

store_memory

Persist cognitive memories with metadata for later retrieval

hybrid_search_memories

Perform hybrid (semantic+keyword) search over stored memories
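A minimal sketch of hybrid score fusion, assuming both signals are normalized to [0, 1]; the server's actual weighting scheme may differ.

```python
def hybrid_score(keyword: float, semantic: float, alpha: float = 0.5) -> float:
    """Blend a normalized keyword score (e.g. BM25) with a semantic
    similarity score (e.g. embedding cosine)."""
    return alpha * keyword + (1 - alpha) * semantic

def rank(docs):
    """docs: list of (id, keyword_score, semantic_score) tuples."""
    return sorted(docs, key=lambda d: hybrid_score(d[1], d[2]), reverse=True)
```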

generate_reflection

Generate reflections or insights from stored memories and context

create_workflow

Create and manage a workflow context with multiple steps and dependencies

record_action_start

Log the start of an agent action within a workflow

record_action_end

Mark a workflow action as completed with outcome details

rag_query

Retrieve relevant documents and generate grounded answers using RAG

fused_search

Perform a hybrid search over indexed content with keyword and semantic signals

multi_completion

Request and compare completions from multiple providers/models

extract_text_from_pdf

Extract text from PDFs with OCR enhancement if needed

process_image_ocr

Run OCR on images and return extracted text with optional processing

browser_get_text

Extract text from a DOM selector in the current browser page

browser_screenshot

Capture a screenshot of the current browser page to a server path

register_api

Dynamically register an external API as MCP tools via OpenAPI specs
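One plausible mapping from an OpenAPI spec to tool names, one tool per (path, method) operation, is sketched below; the server's real naming scheme may differ.

```python
def tools_from_openapi(spec: dict, api_name: str) -> list[str]:
    """Derive one tool name per OpenAPI operation, preferring operationId
    and falling back to a method_path slug (illustrative naming only)."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            op_id = op.get("operationId") or f"{method}_{path.strip('/').replace('/', '_')}"
            tools.append(f"{api_name}_{op_id}")
    return tools
```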

call_dynamic_tool

Call a tool generated from a dynamically registered API endpoint

unregister_api

Remove a previously registered dynamic API integration

list_tools

List available MCP tools and their schemas