A Bayesian MCTS Model Context Protocol (MCP) server that allows Claude to control local Ollama models for advanced MCTS-driven analysis.
Configuration
```json
{
  "mcpServers": {
    "angrysky56-mcts-mcp-server": {
      "command": "uv",
      "args": [
        "run",
        "mcts-mcp-server"
      ],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key",
        "OPENAI_API_KEY": "sk-your-openai-key-here",
        "ANTHROPIC_API_KEY": "sk-anthropic-key-here",
        "UV_PROJECT_ENVIRONMENT": "path/to/mcts-mcp-server"
      }
    }
  }
}
```

You can run a local Monte Carlo Tree Search (MCTS) based MCP server to enable AI-assisted analysis and reasoning. It supports multi-iteration exploration, Bayesian evaluation, and multi-LLM orchestration, so you can systematically explore topics, questions, or text inputs and keep results organized across turns.
Work with an MCP client to request deep analysis from the MCTS server. You provide a question or prompt, and the server runs multiple MCTS iterations and simulations to explore angles, generate structured analyses, and return the best synthesis. You can control which LLM provider and model to use for the underlying reasoning, start new analyses, resume existing ones, and retrieve reports or insights from completed runs.
Prerequisites before starting (a quick version check follows this list):
- Python 3.10+ installed on your system
- An internet connection to fetch dependencies
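If you want to confirm the interpreter requirement before running setup, a one-line check (a generic sketch, nothing project-specific) looks like this:

```python
import sys

# The project requires Python 3.10 or newer (see the prerequisites above).
assert sys.version_info >= (3, 10), "Python 3.10+ is required"
print(f"OK: running Python {sys.version.split()[0]}")
```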
Follow one of the setup options to install and run the MCP server.
```bash
# Option 1: Cross-platform Python setup (recommended)
# Clone the repository
git clone https://github.com/angrysky56/mcts-mcp-server.git
cd mcts-mcp-server

# Run the setup script
python setup.py
```

```bash
# Option 2: Platform-specific scripts
# Linux/macOS
chmod +x setup.sh
./setup.sh

# Windows
setup_windows.bat
```

If you prefer manual steps, you can install the required tools, create a Python virtual environment, and install dependencies as shown in the repository's setup guidance.
The setup creates a local environment and a configuration file that holds API keys and runtime settings. You will configure your LLM providers (OpenAI, Anthropic, Gemini, etc.) via environment variables and choose a default provider/model if desired.
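For orientation, here is a minimal Python sketch of connecting to the server with the official MCP client SDK (the `mcp` package), reusing the command, args, and env keys from the configuration block above. The key values are placeholders you must replace with your own.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server the same way the Configuration block above does.
    server = StdioServerParameters(
        command="uv",
        args=["run", "mcts-mcp-server"],
        env={
            "GEMINI_API_KEY": "your-gemini-key",          # placeholder
            "OPENAI_API_KEY": "sk-your-openai-key-here",  # placeholder
            "ANTHROPIC_API_KEY": "sk-anthropic-key-here", # placeholder
        },
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tool names and descriptions the server exposes.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")


asyncio.run(main())
```

Listing the tools first is the reliable way to learn the exact tool names before calling any of them.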
The server exposes tools covering the full analysis lifecycle (a usage sketch follows this list):

- Start a new MCTS analysis with a specific question; optionally pass provider_name and model_name to override the defaults for that run.
- Execute the MCTS algorithm for a defined number of iterations and simulations per iteration.
- Produce a final summary of the MCTS results.
- View the current MCTS configuration parameters, including the active LLM provider and model.
- Update MCTS configuration parameters (provider/model changes should be made with set_active_llm).
- Check the current status and progress of the MCTS system.
- Choose which LLM provider and model to use for MCTS runs.
- Show the available local Ollama models when using the Ollama provider.
- List recent MCTS runs with key metadata.
- Get detailed information about a specific MCTS run.
- Retrieve the best solution from a given run.
- Perform a comprehensive analysis of a run.
- Extract key insights from a run.
- Extract conclusions from a run.
- Provide suggestions for improving a run.
- Generate a comprehensive report in various formats (markdown, text, HTML).
- Fetch the top-performing runs based on score.
- Compare multiple runs to identify similarities and differences.
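To make the workflow concrete, here is a hedged sketch of driving one analysis end to end from an existing `ClientSession` (as opened in the earlier sketch). Only `set_active_llm` is a tool name confirmed above; every other tool name and argument key (`start_analysis`, `run_iterations`, `generate_report`, `question`, `iterations`, and so on) is a hypothetical placeholder, so substitute the real names reported by `list_tools()`.

```python
from mcp import ClientSession


async def run_analysis(session: ClientSession) -> None:
    # Choose the provider/model for subsequent runs. The tool name comes from
    # the docs above; the argument keys are assumptions.
    await session.call_tool(
        "set_active_llm",
        arguments={"provider_name": "ollama", "model_name": "llama3"},  # model is a placeholder
    )

    # Hypothetical tool names below -- check list_tools() for the real ones.
    await session.call_tool(
        "start_analysis",  # placeholder for "start a new MCTS analysis"
        arguments={"question": "What are the tradeoffs of microservice architectures?"},
    )
    await session.call_tool(
        "run_iterations",  # placeholder for "execute the MCTS algorithm"
        arguments={"iterations": 3, "simulations_per_iteration": 10},
    )
    report = await session.call_tool(
        "generate_report",  # placeholder for "generate a comprehensive report"
        arguments={"format": "markdown"},
    )
    print(report.content)
```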