Bridges Gemini CLI with MCP clients to enable 33 tools, 400+ AI models, and OpenRouter-powered workflows.
Configuration
```json
{
  "mcpServers": {
    "centminmod-gemini-cli-mcp-server": {
      "command": "python",
      "args": [
        "mcp_server.py"
      ],
      "env": {
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
        "GEMINI_TIMEOUT": "300",
        "GEMINI_LOG_LEVEL": "INFO",
        "OPENROUTER_API_KEY": "sk-xxxxxxxxxxxx",
        "GEMINI_COMMAND_PATH": "/usr/local/bin/gemini"
      }
    }
  }
}
```

The Gemini CLI MCP Server connects Google's Gemini CLI with MCP-compatible clients to enable cross-AI workflows, tool orchestration, and multi-model collaboration. It provides a production-ready bridge that routes prompts, manages conversations, and coordinates 400+ AI models through an OpenRouter integration for enterprise-grade AI capabilities.
You use the Gemini CLI MCP Server by configuring an MCP client to talk to the server and then issuing prompts through your client of choice. The server exposes a suite of specialized MCP tools that you can invoke from Claude Code, Claude Desktop, or other MCP-compatible clients. Prompts can be handled by Gemini CLI models, by OpenRouter models, or by cross-model collaborations that span multiple AI providers.
In typical workflows you can perform plan evaluation, code review, and multi-AI collaboration. The server coordinates the selected models, enforces token limits, and provides a structured workflow that helps you compare results, aggregate insights, and produce consolidated reports.
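The consolidated-report idea above can be sketched in a few lines. This is an illustrative sketch only: the `consolidate` helper, the model names, and the `ask` callables are hypothetical stand-ins for real MCP tool calls, not part of the server's actual API.

```python
from typing import Callable

def consolidate(prompt: str, models: dict[str, Callable[[str], str]]) -> dict:
    """Run one prompt against several model callables and build a small report."""
    answers = {name: ask(prompt) for name, ask in models.items()}
    return {
        "prompt": prompt,
        "answers": answers,
        # crude agreement check: did every model return the same text?
        "agreement": len(set(answers.values())) == 1,
    }

# Usage with stub models (placeholders for real model backends):
report = consolidate(
    "Is 7 prime?",
    {"gemini-2.5-pro": lambda p: "yes", "gpt-4o": lambda p: "yes"},
)
```

A real client would replace the lambdas with calls to the server's prompt tools and feed the aggregated answers into a follow-up summarization step.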
Prerequisites: Python 3.10 or higher, plus Node.js for the Gemini CLI. Modern Linux, macOS, and Windows environments are supported.
```bash
# Prerequisites: install uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the MCP server project
git clone https://github.com/centminmod/gemini-cli-mcp-server.git
cd gemini-cli-mcp-server

# Create and activate a Python virtual environment
uv venv
source .venv/bin/activate

# Install Python dependencies
uv pip install -r requirements.txt

# Install Gemini CLI globally
npm install -g @google/gemini-cli

# Configure Gemini API key (replace with your key)
gemini config set api_key YOUR_GEMINI_API_KEY

# Verify installations
gemini --version
python mcp_server.py --help
```

The server expects you to provide the Gemini CLI path and the API key. Typical startup runs the MCP server file directly inside the Python virtual environment; your MCP clients then connect to it.
If you plan to enable OpenRouter and multiple model providers, you will also configure an API key for OpenRouter and select a default model. This lets you access 400+ AI models and manage model fallbacks automatically when quotas are reached.
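The automatic-fallback behavior can be illustrated with a short sketch. This is a hedged illustration, not the server's implementation: the `QuotaExceeded` exception, the backend names, and the ordering are assumptions made for the example.

```python
class QuotaExceeded(Exception):
    """Raised by a model backend when its quota is used up (hypothetical)."""

def ask_with_fallback(prompt, backends):
    """Try each (name, callable) backend in order until one succeeds."""
    errors = {}
    for name, ask in backends:
        try:
            return name, ask(prompt)
        except QuotaExceeded as exc:
            errors[name] = str(exc)  # record the failure, move to next backend
    raise RuntimeError(f"all backends exhausted: {errors}")

def quota_limited(prompt):
    raise QuotaExceeded("daily limit reached")

# The primary model is out of quota, so the OpenRouter backend answers:
name, answer = ask_with_fallback(
    "hello",
    [("gemini-2.5-pro", quota_limited), ("openrouter/llama-3", lambda p: "hi")],
)
```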
If you encounter issues starting the server, verify that the Python environment is active and that the Gemini CLI is accessible from your PATH. Check that the API key is correctly configured and that the required ports are not blocked by your firewall.
For common client issues, ensure you are using absolute paths in your MCP client configuration and that the client can reach the MCP server endpoint. Review server logs for error messages and use the built-in metrics tool to understand performance and usage.
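The checks above can be scripted as a small preflight step before launching the server. This helper is not part of the project; it is a sketch that mirrors the configuration variables shown earlier, with the environment and PATH resolver injected so the logic is testable.

```python
def preflight(env, which):
    """Return a list of problems, given an env mapping and a PATH resolver."""
    problems = []
    if not env.get("GEMINI_API_KEY"):
        problems.append("GEMINI_API_KEY is not set")
    # Accept either an explicit binary path or a PATH lookup (e.g. shutil.which)
    gemini = env.get("GEMINI_COMMAND_PATH") or which("gemini")
    if not gemini:
        problems.append("gemini CLI not found on PATH")
    return problems

# Simulated environments (in real use: preflight(os.environ, shutil.which)):
ok = preflight(
    {"GEMINI_API_KEY": "k", "GEMINI_COMMAND_PATH": "/usr/local/bin/gemini"},
    which=lambda name: None,
)
bad = preflight({}, which=lambda name: None)
```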
Leverage the 33 MCP tools to implement complex workflows such as multi-AI collaboration, content analysis, and structured code reviews. Use OpenRouter to compare responses across Gemini CLI models and OpenRouter providers, and enable conversation history to maintain multi-turn context across interactions.
- Execute Gemini CLI commands with error handling and structured results.
- Get cached Gemini CLI help information (30-minute TTL).
- Get cached Gemini CLI version information (30-minute TTL).
- Send prompts with structured parameters and validation (100,000 char limit).
- List all available Gemini AI models.
- Get server performance metrics and statistics.
- Execute prompts in sandbox mode for code execution (200,000 char limit).
- Get cache statistics for all cache backends.
- Get rate limiting and quota statistics.
- Summarize content with focus-specific analysis (400,000 char limit).
- File-based summarization using @filename syntax (800,000 char limit).
- Evaluate implementation plans for code or architecture (500,000 char limit).
- Review code with detailed analysis (300,000 char limit).
- Comprehensive verification of complete solutions (800,000 char limit).
- Start a new stateful conversation with an ID.
- Continue an existing conversation with context history.
- List active conversations with metadata.
- Clear or delete a specific conversation.
- Get conversation system statistics and health.
- Structured code analysis with a focus on maintainability, security, and quality (NEW).
- Schema-based data extraction from content (NEW).
- Analyze git diffs with contextual feedback (NEW).
- Advanced multi-source content comparison and analysis (NEW).
- Coordinate multi-model collaboration including debates and validations.
- Test OpenRouter connectivity and client functionality.
- Get responses from 400+ models via OpenRouter with file support.
- List all available OpenRouter models with filters.
- Compare responses across Gemini CLI and OpenRouter models.
- OpenRouter usage statistics and costs for the session.
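A client can pre-check prompt sizes against the per-tool character limits listed above before sending anything over the wire. The tool keys below are assumptions inferred from the descriptions, not the server's exact identifiers; only the numeric limits come from the list.

```python
# Character limits from the tool list above; keys are illustrative names.
CHAR_LIMITS = {
    "prompt": 100_000,           # structured prompts with validation
    "sandbox": 200_000,          # sandboxed code execution
    "code_review": 300_000,      # detailed code review
    "summarize": 400_000,        # focus-specific summarization
    "eval_plan": 500_000,        # implementation-plan evaluation
    "summarize_files": 800_000,  # @filename summarization / full verification
}

def within_limit(tool: str, content: str) -> bool:
    """True if `content` fits the given tool's documented character limit."""
    return len(content) <= CHAR_LIMITS[tool]

assert within_limit("prompt", "x" * 99_999)
```

Checking locally avoids a round trip to the server just to receive a validation error back.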