
Gemini MCP Server

Bridges the Gemini CLI with MCP clients, exposing 33 specialized tools, access to 400+ AI models, and OpenRouter-powered workflows.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "centminmod-gemini-cli-mcp-server": {
      "command": "python",
      "args": [
        "mcp_server.py"
      ],
      "env": {
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
        "GEMINI_TIMEOUT": "300",
        "GEMINI_LOG_LEVEL": "INFO",
        "OPENROUTER_API_KEY": "sk-xxxxxxxxxxxx",
        "GEMINI_COMMAND_PATH": "/usr/local/bin/gemini"
      }
    }
  }
}

The Gemini CLI MCP Server lets you connect Google’s Gemini CLI with MCP-compatible clients to enable cross-AI workflows, tool orchestration, and multi-model collaboration. It provides a production-ready bridge that routes prompts, manages conversations, and coordinates 400+ AI models through an OpenRouter integration for rich, enterprise-grade AI capabilities.

How to use

To use the Gemini CLI MCP Server, configure an MCP client to talk to the server, then issue prompts through your client of choice. The server exposes a suite of specialized MCP tools that you can invoke from Claude Code, Claude Desktop, or other MCP-compatible clients. Your prompts can be processed by Gemini CLI models, OpenRouter models, or cross-model collaborations spanning multiple AI providers.

In typical workflows you can perform plan evaluation, code review, and multi-AI collaboration. The server coordinates the selected models, enforces token limits, and provides a structured workflow that helps you compare results, aggregate insights, and produce consolidated reports.
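The per-tool character limits noted in the tool list below can also be enforced client-side before a prompt is sent. A minimal sketch, assuming you want to fail fast in your own client code; the limit values come from the tool list, but the helper function itself is hypothetical, not part of the server's API:

```python
# Client-side guard mirroring the server's per-tool character limits.
# Limit values are taken from the tool list; the helper is illustrative.
CHAR_LIMITS = {
    "gemini_prompt": 100_000,
    "gemini_sandbox": 200_000,
    "gemini_review_code": 300_000,
    "gemini_summarize": 400_000,
}

def check_prompt(tool: str, prompt: str) -> str:
    """Raise before sending a prompt the server would reject as too long."""
    limit = CHAR_LIMITS.get(tool)
    if limit is not None and len(prompt) > limit:
        raise ValueError(f"{tool}: prompt is {len(prompt)} chars, limit is {limit}")
    return prompt
```

Rejecting oversized prompts locally avoids a round trip to the server and gives a clearer error than a mid-workflow failure.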

How to install

Prerequisites: Python 3.10 or higher, plus Node.js for the Gemini CLI. Linux, macOS, and Windows are supported.

# Prerequisites: install uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the MCP server project
git clone https://github.com/centminmod/gemini-cli-mcp-server.git
cd gemini-cli-mcp-server

# Create and activate a Python virtual environment
uv venv
source .venv/bin/activate

# Install Python dependencies
uv pip install -r requirements.txt

# Install Gemini CLI globally
npm install -g @google/gemini-cli

# Configure your Gemini API key (replace with your key)
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"

# Verify installations
gemini --version
python mcp_server.py --help

Configuration and startup notes

The server expects you to provide the Gemini CLI path and the API key. Typical startup involves running the MCP server file directly in a Python virtual environment and then using your MCP clients to connect.
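A typical startup from the project root might look like the following. This is a sketch with placeholder values; substitute your own key, and adjust the paths to match your environment:

```shell
# Placeholder values; substitute your own key and paths
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
export GEMINI_LOG_LEVEL="INFO"
# Resolve the Gemini CLI path, falling back to the common install location
export GEMINI_COMMAND_PATH="$(command -v gemini || echo /usr/local/bin/gemini)"

# Activate the virtual environment and start the server, if present
if [ -f .venv/bin/activate ]; then . .venv/bin/activate; fi
if [ -f mcp_server.py ]; then python mcp_server.py; fi
```

Once the server is running, MCP clients launch or attach to it according to the configuration shown above.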

If you plan to enable OpenRouter and multiple model providers, you will also configure an API key for OpenRouter and select a default model. This lets you access 400+ AI models and manage model fallbacks automatically when quotas are reached.
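The fallback behavior can be understood as a simple priority chain: try the preferred model, and move down the list when a quota is exhausted. A minimal sketch of that idea; the model names and function are illustrative, not the server's actual API:

```python
# Illustrative quota-aware fallback chain; names are hypothetical examples,
# not the server's real configuration.
FALLBACK_CHAIN = [
    "google/gemini-2.5-pro",
    "anthropic/claude-sonnet-4",
    "openai/gpt-4o",
]

def pick_model(quota_exceeded: set[str]) -> str:
    """Return the first model in the chain whose quota is not exhausted."""
    for model in FALLBACK_CHAIN:
        if model not in quota_exceeded:
            return model
    raise RuntimeError("All models in the fallback chain are over quota")
```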

Troubleshooting

If you encounter issues starting the server, verify that the Python virtual environment is active, that the Gemini CLI is accessible from your PATH (or that GEMINI_COMMAND_PATH points to it), and that the API key is correctly configured.

For common client issues, ensure you are using absolute paths in your MCP client configuration and that the client can reach the MCP server endpoint. Review server logs for error messages and use the built-in metrics tool to understand performance and usage.
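For example, a client configuration using absolute paths might look like this (the paths shown are illustrative placeholders for your clone location):

```json
{
  "mcpServers": {
    "centminmod-gemini-cli-mcp-server": {
      "command": "/path/to/gemini-cli-mcp-server/.venv/bin/python",
      "args": ["/path/to/gemini-cli-mcp-server/mcp_server.py"],
      "env": {
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
        "GEMINI_COMMAND_PATH": "/usr/local/bin/gemini"
      }
    }
  }
}
```

Pointing `command` at the virtual environment's Python interpreter ensures the server runs with its installed dependencies regardless of the client's working directory.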

Advanced usage patterns

Leverage the 33 MCP tools to implement complex workflows such as multi-AI collaboration, content analysis, and structured code reviews. Use OpenRouter to compare responses across Gemini CLI models and OpenRouter providers, and enable conversation history to maintain multi-turn context across interactions.
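The stateful-conversation pattern behind gemini_start_conversation and gemini_continue_conversation can be pictured as an ID-keyed history store. A minimal sketch of the concept; the real server's storage, IDs, and message format may differ:

```python
# Conceptual sketch of ID-keyed conversation history; illustrative only.
import uuid

conversations: dict[str, list[dict[str, str]]] = {}

def start_conversation() -> str:
    """Create an empty history and return its conversation ID."""
    cid = str(uuid.uuid4())
    conversations[cid] = []
    return cid

def continue_conversation(cid: str, prompt: str, reply: str) -> list[dict[str, str]]:
    """Append a user/assistant turn and return the accumulated context."""
    history = conversations[cid]
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": reply})
    return history
```

Because each turn is appended to the stored history, later prompts in the same conversation carry the full multi-turn context.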

Available tools

gemini_cli

Execute Gemini CLI commands with error handling and structured results.

gemini_help

Get cached Gemini CLI help information (30-minute TTL).

gemini_version

Get cached Gemini CLI version information (30-minute TTL).

gemini_prompt

Send prompts with structured parameters and validation (100,000 char limit).

gemini_models

List all available Gemini AI models.

gemini_metrics

Get server performance metrics and statistics.

gemini_sandbox

Execute prompts in sandbox mode for code execution (200,000 char limit).

gemini_cache_stats

Get cache statistics for all cache backends.

gemini_rate_limiting_stats

Get rate limiting and quota statistics.

gemini_summarize

Summarize content with focus-specific analysis (400,000 char limit).

gemini_summarize_files

File-based summarization using @filename syntax (800,000 char limit).

gemini_eval_plan

Evaluate implementation plans for code or architecture (500,000 char limit).

gemini_review_code

Review code with detailed analysis (300,000 char limit).

gemini_verify_solution

Comprehensive verification of complete solutions (800,000 char limit).

gemini_start_conversation

Start a new stateful conversation with an ID.

gemini_continue_conversation

Continue an existing conversation with context history.

gemini_list_conversations

List active conversations with metadata.

gemini_clear_conversation

Clear or delete a specific conversation.

gemini_conversation_stats

Get conversation system statistics and health.

gemini_code_review

Structured code analysis with a focus on maintainability, security, and quality (NEW).

gemini_extract_structured

Schema-based data extraction from content (NEW).

gemini_git_diff_review

Analyze git diffs with contextual feedback (NEW).

gemini_content_comparison

Advanced multi-source content comparison and analysis (NEW).

gemini_ai_collaboration

Coordinate multi-model collaboration including debates and validations.

gemini_test_openrouter

Test OpenRouter connectivity and client functionality.

gemini_openrouter_opinion

Get responses from 400+ models via OpenRouter with file support.

gemini_openrouter_models

List all available OpenRouter models with filters.

gemini_cross_model_comparison

Compare responses across Gemini CLI and OpenRouter models.

gemini_openrouter_usage_stats

Get OpenRouter usage statistics and costs for the session.