Gemini MCP server

Integrates Google Gemini 2.5 Pro as a development partner for extended reasoning, code analysis, and collaborative problem-solving. Provides specialized tools for deep thinking, code review, debugging, file analysis, and brainstorming through a 1M-token context window.
Provider
Fahad Gilani
Release date
Jun 09, 2025
Language
JavaScript
Stats
3.2K stars

Zen MCP Server is a powerful tool that orchestrates multiple AI models to create a cohesive development experience. It integrates with Claude and Gemini CLI, allowing you to leverage various AI models for enhanced code analysis, problem-solving, and collaborative development.

Installation

Prerequisites

  • Python 3.10+ (3.12 recommended)
  • Git
  • WSL2 for Windows users

Set Up API Keys

You need at least one of the following API keys: Gemini, OpenAI, OpenRouter, or DIAL (see Configure API Keys below).

Installation Steps

# Clone the repository
git clone https://github.com/BeehiveInnovations/zen-mcp-server.git
cd zen-mcp-server

# Run the setup script
./run-server.sh

The setup script handles everything automatically: creating a Python environment, installing dependencies, and configuring Claude integrations.

Configure API Keys

Edit the .env file to add your API keys:

nano .env

Add at least one of the following:

GEMINI_API_KEY=your-gemini-api-key-here
OPENAI_API_KEY=your-openai-api-key-here
OPENROUTER_API_KEY=your-openrouter-key
DIAL_API_KEY=your-dial-api-key-here

For local models (like Ollama):

CUSTOM_API_URL=http://localhost:11434/v1
CUSTOM_API_KEY=
CUSTOM_MODEL_NAME=llama3.2

No restart is needed; the server reads the .env file each time it is called.
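If you register the server with Claude Desktop manually instead of relying on the setup script, the entry in claude_desktop_config.json might look like the sketch below. The .zen_venv path and server.py filename are assumptions based on a typical clone; adjust them to match where you cloned the repository and the environment the setup script created.

{
    "mcpServers": {
        "zen": {
            "command": "/path/to/zen-mcp-server/.zen_venv/bin/python",
            "args": ["/path/to/zen-mcp-server/server.py"],
            "env": {
                "GEMINI_API_KEY": "your-gemini-api-key-here"
            }
        }
    }
}

API keys can also be left out of the env block entirely, since the server reads them from its own .env file.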

Usage

After installation, you can use the MCP server with Claude by simply asking it to use Zen tools in your prompts:

Think deeper about this architecture design with zen

Or specify a particular tool:

Using zen perform a code review of this code for security issues

Or specify a particular model:

Use flash to suggest how to format this code based on the specs mentioned in policy.md

Available Tools

  1. chat - Collaborative thinking partner for brainstorming and validation
  2. thinkdeep - Extended reasoning for complex problems
  3. planner - Break down complex projects into step-by-step plans
  4. consensus - Get diverse expert opinions from multiple AI models
  5. codereview - Comprehensive code analysis with prioritized feedback
  6. precommit - Validate git changes before committing
  7. debug - Systematic investigation and debugging assistance
  8. analyze - Code understanding and architectural exploration
  9. refactor - Intelligent code refactoring with decomposition
  10. tracer - Call-flow mapping and dependency tracing
  11. testgen - Comprehensive test generation with edge cases
  12. secaudit - Security audit with OWASP analysis
  13. docgen - Documentation generation with complexity analysis
  14. listmodels - Display all available AI models
  15. version - Get server version information

Tool Usage Examples

# Code review
Perform a codereview with gemini pro especially the auth.py as I feel some code is bypassing security checks

# Debugging
See logs under /path/to/diagnostics.log and related code. Using zen's debug tool with gemini pro, find the root cause

# Planning
Use planner to break down the microservices migration project into manageable steps

# Analysis
Use gemini to analyze main.py to understand how it works

# Documentation
Use docgen to add comprehensive documentation to the UserManager class

You can also use structured prompts:

/zen:chat ask local-llama what 2 + 2 is
/zen:thinkdeep use o3 and tell me why the code isn't working in sorting.swift
/zen:codereview review for security module ABC

Advanced Features

One of the most powerful features is AI-to-AI conversation threading, allowing Claude to orchestrate conversations between multiple AI models while maintaining context across the entire workflow. This enables true AI collaboration where models can build on each other's insights.

Context revival is another remarkable capability - even after Claude's context resets, the MCP server maintains conversation history, allowing discussions to continue seamlessly across sessions.

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.
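For this particular server, a project-level .cursor/mcp.json entry might look like the following sketch. The Python interpreter path and server.py filename are assumptions about how the setup script lays out its environment, so adjust them to your actual clone location.

{
    "mcpServers": {
        "zen": {
            "command": "/path/to/zen-mcp-server/.zen_venv/bin/python",
            "args": ["/path/to/zen-mcp-server/server.py"]
        }
    }
}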

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the added MCP server provides and will call them when it needs to.

You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.
