Zen MCP Server is a powerful tool that orchestrates multiple AI models to create a cohesive development experience. It integrates with Claude and Gemini CLI, allowing you to leverage various AI models for enhanced code analysis, problem-solving, and collaborative development.
You need at least one API key from Gemini, OpenAI, OpenRouter, or DIAL; you'll add it in the configuration step below. Installation takes two steps:
# Clone the repository
git clone https://github.com/BeehiveInnovations/zen-mcp-server.git
cd zen-mcp-server
# Run the setup script
./run-server.sh
The setup script handles everything automatically: creating a Python environment, installing dependencies, and configuring Claude integrations.
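If you're curious what the script does (or need to recover from a failed run), the steps are roughly equivalent to the manual setup below. This is a sketch rather than the script's exact contents, and the .env.example filename is an assumption; the script also registers the server with Claude for you:
# Rough manual equivalent of run-server.sh (details may differ)
python3 -m venv .zen_venv          # create an isolated Python environment
source .zen_venv/bin/activate      # activate it
pip install -r requirements.txt    # install the server's dependencies
cp .env.example .env               # start from the example config (filename assumed)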
Edit the .env file to add your API keys:
nano .env
Add at least one of the following:
GEMINI_API_KEY=your-gemini-api-key-here
OPENAI_API_KEY=your-openai-api-key-here
OPENROUTER_API_KEY=your-openrouter-key
DIAL_API_KEY=your-dial-api-key-here
For local models (like Ollama):
CUSTOM_API_URL=http://localhost:11434/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=llama3.2
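For this to work, Ollama must already be running and serving the model on port 11434. Assuming Ollama is installed, that typically means:
# Fetch the model once, then make sure the Ollama server is up
ollama pull llama3.2
ollama serve   # skip if Ollama already runs as a background service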
No restart is needed: the server reads the .env file each time it's called.
After installation, you can use the MCP server with Claude by simply asking it to use Zen tools in your prompts:
Think deeper about this architecture design with zen
Or specify a particular tool:
Using zen perform a code review of this code for security issues
Or specify a particular model:
Use flash to suggest how to format this code based on the specs mentioned in policy.md
# Code review
Perform a codereview with gemini pro especially the auth.py as I feel some code is bypassing security checks
# Debugging
See logs under /path/to/diagnostics.log and related code. Using zen's debug tool with gemini pro, find the root cause
# Planning
Use planner to break down the microservices migration project into manageable steps
# Analysis
Use gemini to analyze main.py to understand how it works
# Documentation
Use docgen to add comprehensive documentation to the UserManager class
You can also use structured prompts:
/zen:chat ask local-llama what 2 + 2 is
/zen:thinkdeep use o3 and tell me why the code isn't working in sorting.swift
/zen:codereview review for security module ABC
One of the most powerful features is AI-to-AI conversation threading, allowing Claude to orchestrate conversations between multiple AI models while maintaining context across the entire workflow. This enables true AI collaboration where models can build on each other's insights.
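For example, you might have two models weigh in on the same problem (an illustrative prompt; the model and file names are placeholders):
Use zen to have gemini pro and o3 debate the best fix for the race condition in worker.py, then summarize where they agree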
Context revival is another remarkable capability - even after Claude's context resets, the MCP server maintains conversation history, allowing discussions to continue seamlessly across sessions.
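In practice, that means you can pick a thread back up in a fresh session with a prompt like this (illustrative; the topic is a placeholder):
Continue our earlier zen conversation about the caching strategy and ask gemini pro to recap what we decided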
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects. If you only need the server in a single project, you can add it to that project instead by creating or adding to a .cursor/mcp.json file.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server". When you click that button, the ~/.cursor/mcp.json file will open and you can add your server like this:
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
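For the Zen server set up earlier, the entry points at the server's Python environment rather than npx. The paths below are assumptions based on the setup script's defaults, so adjust them to wherever you cloned the repository:
{
  "mcpServers": {
    "zen": {
      "command": "/path/to/zen-mcp-server/.zen_venv/bin/python",
      "args": ["/path/to/zen-mcp-server/server.py"]
    }
  }
}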
To add an MCP server to a project, create a new .cursor/mcp.json file or add the entry to the existing one. This looks exactly the same as the global MCP server example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then be able to see the tools the added MCP server exposes and will call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.
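For example (an illustrative request; the tool and file names are placeholders):
Use the codereview tool from the zen server to check auth.py for security issues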