
Deep Code Reasoning MCP Server

Provides multi-model code reasoning by coordinating Claude Code and Gemini for deep analysis and debugging across large codebases.

Installation
Add the following to your MCP client configuration file.

Configuration

{
    "mcpServers": {
        "deep-code-reasoning": {
            "command": "node",
            "args": [
                "/path/to/deep-code-reasoning-mcp/dist/index.js"
            ],
            "env": {
                "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY"
            }
        }
    }
}

You set up and run an MCP Server that coordinates Claude Code with Gemini to analyze, debug, and optimize large codebases. It lets Claude handle local, file-scoped tasks while Gemini handles massive context analysis, execution traces, and cross-system debugging, giving you a powerful, multi-model workflow for complex software projects.

How to use

You use an MCP client to connect to the server, then start by letting Claude Code perform initial analysis. When tasks require broader context, you escalate to Gemini through the MCP Router to distribute work across models. The server returns comprehensive context including code, logs, and traces, so Claude can implement fixes with evidence-backed changes. Use this flow for deep trace analysis, cross-service impact reviews, and hypothesis-driven debugging.
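To make the escalation step concrete, here is a minimal sketch of what an escalation payload might look like. The field names (`analysis_type`, `attempted_approaches`, `code_scope`, `depth_level`) are illustrative assumptions based on the description of `escalate_analysis` below, not the server's documented schema.

```typescript
// Illustrative escalation payload builder. Field names are assumptions,
// not the server's actual tool schema.
interface EscalationRequest {
  analysis_type: string;          // e.g. "execution_trace" or "cross_system"
  attempted_approaches: string[]; // what Claude already tried locally
  code_scope: { files: string[] };
  depth_level: number;            // how deeply Gemini should analyze
}

function buildEscalation(
  files: string[],
  tried: string[],
  depth = 3
): EscalationRequest {
  return {
    analysis_type: "execution_trace",
    attempted_approaches: tried,
    code_scope: { files },
    depth_level: depth,
  };
}

// Hypothetical usage: escalate after local analysis of a retry loop stalls.
const req = buildEscalation(
  ["src/payments/retry.ts"],
  ["checked retry loop locally"]
);
```

The key idea is that the escalation carries evidence of what was already attempted, so Gemini does not repeat Claude's local work.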

How to install

Prerequisites: Node.js 18 or later, a Google Cloud account with Gemini API access, and a Gemini API key.

Follow these steps to install and set up the server locally.

git clone https://github.com/Haasonsaas/deep-code-reasoning-mcp.git
cd deep-code-reasoning-mcp
npm install
cp .env.example .env
# Edit .env and add your GEMINI_API_KEY
npm run build

Configuration

Set the Gemini API key as an environment variable and configure Claude Desktop so it can communicate with the MCP server.

{
  "mcpServers": {
    "deep-code-reasoning": {
      "command": "node",
      "args": ["/path/to/deep-code-reasoning-mcp/dist/index.js"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key"
      }
    }
  }
}

Security considerations

Store your Gemini API key securely in environment variables. The server reads local files as part of analysis, so ensure proper file permissions and restrict access to sensitive project data. Review Gemini’s data handling policies for how code and traces are processed.

Troubleshooting

If you encounter issues, verify that GEMINI_API_KEY exists in your environment, ensure the MCP server command matches what you run locally, and check file permissions for the analysis targets.
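The checks above can be scripted. This is a hypothetical preflight helper, not part of the server; the function name and default path are illustrative.

```typescript
import { existsSync } from "node:fs";

// Hypothetical preflight check mirroring the troubleshooting steps:
// verify the API key is present and the built entry point exists.
function preflight(
  env: Record<string, string | undefined>,
  serverPath: string
): string[] {
  const problems: string[] = [];
  if (!env.GEMINI_API_KEY) {
    problems.push("GEMINI_API_KEY is not set in the environment");
  }
  if (!existsSync(serverPath)) {
    problems.push(
      `server entry point not found: ${serverPath} (did you run npm run build?)`
    );
  }
  return problems;
}

// Usage: report any problems before wiring the server into an MCP client.
console.log(preflight(process.env, "/path/to/deep-code-reasoning-mcp/dist/index.js"));
```

An empty array means both checks passed; otherwise each entry names a fix to apply before retrying.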

Best practices for multi-model debugging

Capture a trace timeline first using distributed tracing. Start with Claude Code for quick investigation and fixes, then escalate to Gemini for long-context analysis, cross-service correlation, and synthetic testing. Combine MCP results with traditional debugging tools to verify fixes.

Development

Development commands you will use regularly for this MCP server.

# Run in development mode
npm run dev

# Run tests
npm test

# Lint code
npm run lint

# Type check
npm run typecheck

Architecture

The system consists of three components: Claude Code, an MCP Server Router, and the Gemini API. Claude handles fast, local analysis; the MCP server orchestrates tasks and context gathering; Gemini performs deep, long-context analysis and code execution when needed.
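The routing idea can be sketched as a small decision function. The task shape and token threshold here are illustrative assumptions, not the server's actual logic.

```typescript
// Minimal sketch of multi-model routing: treat each model as a service
// and pick one based on the task's context demands. Thresholds are
// illustrative, not the server's real heuristics.
type Model = "claude" | "gemini";

interface AnalysisTask {
  files: string[];
  estimatedTokens: number; // rough size of the context the task needs
  crossService: boolean;   // does the task span service boundaries?
}

function routeTask(task: AnalysisTask, localLimit = 100_000): Model {
  // Escalate when the context exceeds what local analysis handles well,
  // or when the task crosses system boundaries.
  if (task.crossService || task.estimatedTokens > localLimit) {
    return "gemini";
  }
  return "claude";
}
```

A small, file-scoped bug fix stays with Claude; a quarter-million-token trace or a change touching multiple services routes to Gemini.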

Notes

The server is designed to treat LLMs as heterogeneous microservices. Route tasks to Claude for local-context work and to Gemini for large-scale context, synthetic testing, and cross-system analysis.

Available tools

start_conversation

Initiates a conversational analysis session between Claude and Gemini to coordinate multi-turn analysis and dialog-based problem solving.

continue_conversation

Sends a follow-up message to Gemini within an active session to carry on iterative reasoning and expand context.

finalize_conversation

Completes the conversation and returns a structured, actionable analysis summary.

get_conversation_status

Retrieves the current status and progress of an ongoing Claude-Gemini conversation.

escalate_analysis

Hands off complex analysis from Claude to Gemini with detailed context and a depth parameter.

trace_execution_path

Performs deep execution analysis with semantic context, focusing on data flow and state changes.

cross_system_impact

Analyzes how changes propagate across service boundaries and system topology.

performance_bottleneck

Analyzes deep performance issues, identifying bottlenecks across code paths.

hypothesis_test

Tests specific theories about code behavior using structured approaches and evidence.
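The conversational tools above follow a simple lifecycle: start, continue zero or more times, then finalize. This sketch models only that lifecycle as a state machine; the tool-call wiring and all names here are illustrative, not the server's implementation.

```typescript
// Illustrative state machine for the conversational session lifecycle
// (start_conversation -> continue_conversation* -> finalize_conversation).
type SessionState = "idle" | "active" | "finalized";

class ConversationSession {
  private state: SessionState = "idle";
  readonly turns: string[] = [];

  start(question: string): void {
    if (this.state !== "idle") throw new Error("session already started");
    this.state = "active";
    this.turns.push(question);
  }

  continue(message: string): void {
    if (this.state !== "active") throw new Error("no active session");
    this.turns.push(message);
  }

  finalize(): { turnCount: number; state: SessionState } {
    if (this.state !== "active") throw new Error("no active session");
    this.state = "finalized";
    return { turnCount: this.turns.length, state: this.state };
  }

  // Mirrors get_conversation_status: report where the session stands.
  get status(): SessionState {
    return this.state;
  }
}
```

Enforcing the state transitions up front makes misuse (finalizing twice, continuing a finished session) fail loudly instead of silently losing context.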