
MCTS MCP Server

A Bayesian MCTS Model Context Protocol server that lets Claude control local Ollama models for advanced MCTS-based analysis.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "angrysky56-mcts-mcp-server": {
      "command": "uv",
      "args": [
        "run",
        "mcts-mcp-server"
      ],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key",
        "OPENAI_API_KEY": "sk-your-openai-key-here",
        "ANTHROPIC_API_KEY": "sk-anthropic-key-here",
        "UV_PROJECT_ENVIRONMENT": "path/to/mcts-mcp-server"
      }
    }
  }
}

You can run a local Monte Carlo Tree Search (MCTS)-based MCP server to enable AI-assisted analysis and reasoning. It supports multi-iteration exploration, Bayesian evaluation, and multi-LLM orchestration, so you can systematically explore topics, questions, or text inputs and keep results organized across turns.
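The server's internals are not documented on this page, but the core technique is straightforward to sketch. Below is a minimal, illustrative Bayesian MCTS loop in Python that uses Beta posteriors with Thompson sampling for node selection; every name in it is hypothetical, and it is not the server's actual code.

import random

class Node:
    """A candidate line of analysis, with a Beta(alpha, beta) posterior
    over the probability that expanding it yields a good result."""
    def __init__(self, content, parent=None):
        self.content = content
        self.parent = parent
        self.children = []
        self.alpha = 1.0  # prior pseudo-successes
        self.beta = 1.0   # prior pseudo-failures

def select(root):
    # Thompson sampling: at each level, draw one sample from every
    # child's posterior and descend into the highest draw. Unlike plain
    # UCT, the posterior itself carries the exploration uncertainty.
    node = root
    while node.children:
        node = max(node.children,
                   key=lambda c: random.betavariate(c.alpha, c.beta))
    return node

def backpropagate(node, reward):
    # reward is a 0..1 quality score (e.g. an LLM's rating of the
    # analysis); it updates every posterior on the path to the root.
    while node is not None:
        node.alpha += reward
        node.beta += 1.0 - reward
        node = node.parent

The practical upshot of the Bayesian treatment is that node scores carry uncertainty, so a few noisy early evaluations do not lock the search into a single branch.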

How to use

Work with an MCP client to request deep analysis from the MCTS server. You provide a question or prompt, and the server runs multiple MCTS iterations and simulations to explore angles, generate structured analyses, and return the best synthesis. You can control which LLM provider and model to use for the underlying reasoning, start new analyses, resume existing ones, and retrieve reports or insights from completed runs.
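For example, a scripted client session might look like the following sketch, assuming the official MCP Python SDK's stdio client. The tool names match the list under Available tools below; the argument names (question, iterations, simulations_per_iteration) are illustrative guesses, not a documented schema.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server the same way the client configuration above does.
    server = StdioServerParameters(command="uv", args=["run", "mcts-mcp-server"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Start an analysis, run the search, then ask for a synthesis.
            await session.call_tool("initialize_mcts", arguments={
                "question": "What are the trade-offs of microservice architectures?"})
            await session.call_tool("run_mcts", arguments={
                "iterations": 2, "simulations_per_iteration": 5})
            result = await session.call_tool("generate_synthesis", arguments={})
            print(result)

asyncio.run(main())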

How to install

Prerequisites you need before starting:

- Python 3.10+ installed on your system

- Internet connection to fetch dependencies

Follow one of the setup options to install and run the MCP server.

# Option 1: Cross-platform Python setup (Recommended)
# Clone the repository
git clone https://github.com/angrysky56/mcts-mcp-server.git
cd mcts-mcp-server

# Run the setup script
python setup.py

# Option 2: Platform-specific scripts

# Linux/macOS
chmod +x setup.sh
./setup.sh

# Windows
setup_windows.bat

If you prefer manual steps, install the required tools, create a Python virtual environment, and install dependencies as described in the repository's setup documentation.

Additional configuration and notes

The setup creates a local environment and a configuration file that holds API keys and runtime settings. You will configure your LLM providers (OpenAI, Anthropic, Gemini, etc.) via environment variables and choose a default provider/model if desired.

Available tools

initialize_mcts

Start a new MCTS analysis with a specific question. You can optionally specify provider_name and model_name to override defaults for this run.

run_mcts

Execute the MCTS algorithm for a defined number of iterations and simulations per iteration.

generate_synthesis

Produce a final summary of the MCTS results.

get_config

View current MCTS configuration parameters, including the active LLM provider and model.

update_config

Update MCTS configuration parameters (provider/model changes should be done with set_active_llm).

get_mcts_status

Check the current status and progress of the MCTS system.

set_active_llm

Choose which LLM provider and model to use for MCTS runs.

list_ollama_models

Show available local Ollama models if using the Ollama provider.

list_mcts_runs

List recent MCTS runs with key metadata.

get_mcts_run_details

Get detailed information about a specific MCTS run.

get_mcts_solution

Retrieve the best solution from a given run.

analyze_mcts_run

Perform a comprehensive analysis of a run.

get_mcts_insights

Extract key insights from a run.

extract_mcts_conclusions

Extract conclusions from a run.

suggest_mcts_improvements

Provide suggestions for improving a run.

get_mcts_report

Generate a comprehensive report in various formats (markdown, text, html).

get_best_mcts_runs

Fetch top-performing runs based on score.

compare_mcts_runs

Compare multiple runs to identify similarities and differences.
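
Taken together, the run-management tools support a review workflow along the lines of the sketch below, which reuses a ClientSession like the one in the earlier example; the argument names (count, run_id, format) are again illustrative guesses rather than a documented schema.

from mcp import ClientSession

async def review_runs(session: ClientSession) -> str:
    # List recent runs, pull the top performers, then render a report.
    await session.call_tool("list_mcts_runs", arguments={})
    await session.call_tool("get_best_mcts_runs", arguments={"count": 3})
    report = await session.call_tool("get_mcts_report", arguments={
        "run_id": "example-run-id", "format": "markdown"})
    return str(report)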