Deep Code Reasoning MCP Server
Provides multi-model code reasoning by coordinating Claude Code and Gemini for deep analysis and debugging across large codebases.
Configuration
{
  "mcpServers": {
    "deep_code_reasoning": {
      "command": "node",
      "args": [
        "/path/to/deep-code-reasoning-mcp/dist/index.js"
      ],
      "env": {
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY"
      }
    }
  }
}

You set up and run an MCP server that coordinates Claude Code with Gemini to analyze, debug, and optimize large codebases. Claude handles local, file-scoped tasks while Gemini takes on massive-context analysis, execution traces, and cross-system debugging, giving you a multi-model workflow for complex software projects.
You use an MCP client to connect to the server, then start by letting Claude Code perform initial analysis. When tasks require broader context, you escalate to Gemini through the MCP Router to distribute work across models. The server returns comprehensive context including code, logs, and traces, so Claude can implement fixes with evidence-backed changes. Use this flow for deep trace analysis, cross-service impact reviews, and hypothesis-driven debugging.
Prerequisites: Node.js 18 or later, a Google Cloud account with Gemini API access, and a Gemini API key.
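The prerequisites above can be checked programmatically before launching the server. The sketch below is illustrative, not part of the repository; `checkPrereqs` is a hypothetical helper name.

```typescript
// Hypothetical preflight helper: verify the Node.js version and required
// environment variables listed in the prerequisites. Illustrative only.
function checkPrereqs(
  nodeVersion: string, // e.g. process.version, such as "v20.11.1"
  env: Record<string, string | undefined>,
): string[] {
  const problems: string[] = [];
  const major = Number(nodeVersion.replace(/^v/, "").split(".")[0]);
  if (!Number.isInteger(major) || major < 18) {
    problems.push(`Node.js 18 or later required, found ${nodeVersion}`);
  }
  if (!env.GEMINI_API_KEY) {
    problems.push("GEMINI_API_KEY is not set");
  }
  return problems;
}

// Report any issues for the current process before starting the server.
const issues = checkPrereqs(process.version, process.env);
if (issues.length > 0) {
  console.error(issues.join("\n"));
}
```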
Step-by-step installation and setup you can follow locally.
git clone https://github.com/Haasonsaas/deep-code-reasoning-mcp.git
cd deep-code-reasoning-mcp
npm install
cp .env.example .env
# Edit .env and add your GEMINI_API_KEY
npm run build

Set the Gemini API key as an environment variable and configure Claude Desktop so it can communicate with the MCP server.
{
  "mcpServers": {
    "deep-code-reasoning": {
      "command": "node",
      "args": ["/path/to/deep-code-reasoning-mcp/dist/index.js"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key"
      }
    }
  }
}

Store your Gemini API key securely in environment variables. The server reads local files as part of analysis, so ensure proper file permissions and restrict access to sensitive project data. Review Gemini’s data handling policies for how code and traces are processed.
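One concrete way to act on the file-permission advice is to verify that a secrets file such as `.env` is readable only by its owner. This is a POSIX-only sketch; the helper names are assumptions, not part of the server.

```typescript
import { statSync } from "node:fs";

// True when no group/other permission bits are set (0o077 masks them).
function isOwnerOnly(mode: number): boolean {
  return (mode & 0o077) === 0;
}

// Illustrative check: warn if a secrets file is exposed to other users.
function warnIfExposed(path: string): void {
  const mode = statSync(path).mode & 0o777;
  if (!isOwnerOnly(mode)) {
    console.warn(`${path} is readable by other users; consider: chmod 600 ${path}`);
  }
}
```

For example, a file created with `chmod 600 .env` passes the check, while the common default `644` does not.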
If you encounter issues, verify that GEMINI_API_KEY exists in your environment, ensure the MCP server command matches what you run locally, and check file permissions for the analysis targets.
Capture a trace timeline first using distributed tracing. Start with Claude Code for quick investigation and fixes, then escalate to Gemini for long-context analysis, cross-service correlation, and synthetic testing. Combine MCP results with traditional debugging tools to verify fixes.
Development commands you will use regularly for this MCP server.
# Run in development mode
npm run dev
# Run tests
npm test
# Lint code
npm run lint
# Type check
npm run typecheck

The system consists of three components: Claude Code, an MCP Server Router, and the Gemini API. Claude handles fast, local analysis; the MCP server orchestrates tasks and context gathering; Gemini performs deep-context analysis and code execution when needed.
The server is designed to treat LLMs as heterogeneous microservices. Route tasks to Claude for local-context work and to Gemini for large-scale context, synthetic testing, and cross-system analysis.
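The routing idea above can be sketched as a small decision function. The task shape, token threshold, and model labels below are assumptions for illustration; they are not the server's actual API.

```typescript
// Hypothetical router sketch: treat each LLM as a microservice and route
// work by context size and scope. Threshold and types are illustrative.
type Model = "claude" | "gemini";

interface AnalysisTask {
  files: string[];
  estimatedTokens: number; // rough size of code + logs + traces
  crossService: boolean;   // does the task span service boundaries?
}

const LOCAL_CONTEXT_LIMIT = 50_000; // illustrative cutoff, not the server's

function routeTask(task: AnalysisTask): Model {
  // Escalate to Gemini for large contexts or cross-system analysis;
  // keep fast, file-scoped work on Claude.
  if (task.crossService || task.estimatedTokens > LOCAL_CONTEXT_LIMIT) {
    return "gemini";
  }
  return "claude";
}
```

The design point is that routing is a pure function of the task's context requirements, so either model can be swapped out without changing the orchestration logic.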
Initiates a conversational analysis session between Claude and Gemini to coordinate multi-turn analysis and dialog-based problem solving.
Sends a follow-up message to Gemini within an active session to carry on iterative reasoning and expand context.
Completes the conversation and returns a structured, actionable analysis summary.
Retrieves the current status and progress of an ongoing Claude-Gemini conversation.
Hands off complex analysis from Claude to Gemini with detailed context and a depth parameter.
Performs deep execution analysis with semantic context, focusing on data flow and state changes.
Analyzes how changes propagate across service boundaries and system topology.
Analyzes deep performance issues, identifying bottlenecks across code paths.
Tests specific theories about code behavior using structured approaches and evidence.