Deep Code Reasoning MCP server

Enables intelligent routing between Claude and Google's Gemini AI for complementary code analysis. It leverages Gemini's 1M-token context window for large-codebase analysis while Claude handles local operations, and supports conversational AI-to-AI dialogue for multi-turn problem-solving sessions.
Provider
Jonathan Haas
Release date
Jun 12, 2025
Language
TypeScript
Stats
72 stars

The Deep Code Reasoning MCP Server pairs Claude Code with Google's Gemini AI for complementary code analysis. This server enables intelligent routing between these models, allowing Claude to handle local-context operations while Gemini tackles huge-context analysis with its 1M token capacity and code execution capabilities.

Installation

Prerequisites

  • Node.js 18 or later
  • Google Cloud account with Gemini API access
  • Gemini API key from Google AI Studio

Manual Setup

  1. Clone the repository:
git clone https://github.com/Haasonsaas/deep-code-reasoning-mcp.git
cd deep-code-reasoning-mcp
  2. Install dependencies:
npm install
  3. Set up your Gemini API key:
cp .env.example .env
# Edit .env and add your GEMINI_API_KEY
  4. Build the project:
npm run build

Configuration

Environment Variables

  • GEMINI_API_KEY (required): Your Google Gemini API key

Claude Desktop Configuration

Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "deep-code-reasoning": {
      "command": "node",
      "args": ["/path/to/deep-code-reasoning-mcp/dist/index.js"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key"
      }
    }
  }
}

Usage

How It Works

  1. Claude Code performs initial analysis using its strengths in multi-file refactoring
  2. When beneficial, Claude escalates to this MCP server, particularly for:
    • Analyzing large log/trace dumps exceeding Claude's context
    • Running hypothesis testing with code execution
    • Correlating failures across microservices
  3. Server prepares comprehensive context including code, logs, and traces
  4. Gemini analyzes with its 1M-token context
  5. Results returned to Claude Code for implementation of fixes
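The routing decision in step 2 can be sketched as a simple context-budget check. This is an illustrative sketch, not the server's actual logic: the token budget, `estimateTokens`, and `shouldEscalate` are all hypothetical names chosen for the example.

```typescript
// Hypothetical sketch of the escalation decision described above.
const CLAUDE_CONTEXT_BUDGET = 200_000; // assumed local token budget

function estimateTokens(text: string): number {
  // Rough heuristic: ~4 characters per token.
  return Math.ceil(text.length / 4);
}

function shouldEscalate(artifacts: string[]): boolean {
  // Escalate to Gemini when the combined logs/traces/code exceed
  // what Claude can comfortably analyze in its own context window.
  const total = artifacts.reduce((sum, a) => sum + estimateTokens(a), 0);
  return total > CLAUDE_CONTEXT_BUDGET;
}

// A 1 MB trace dump (~250k tokens) would be routed to Gemini:
const bigTrace = "x".repeat(1_000_000);
shouldEscalate([bigTrace]);    // true
shouldEscalate(["short log"]); // false
```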

Available Tools

Conversational Analysis Tools

These tools enable Claude and Gemini to engage in multi-turn dialogues:

start_conversation

Initiates an analysis session between Claude and Gemini.

{
  claude_context: {
    attempted_approaches: string[];      // What Claude tried
    partial_findings: any[];            // What Claude found
    stuck_description: string;          // Where Claude got stuck
    code_scope: {
      files: string[];                  // Files to analyze
      entry_points?: CodeLocation[];    // Starting points
      service_names?: string[];         // Services involved
    }
  };
  analysis_type: 'execution_trace' | 'cross_system' | 'performance' | 'hypothesis_test';
  initial_question?: string;            // Optional opening question
}
continue_conversation

Continues an active conversation with Claude's response.

{
  session_id: string;                   // Active session ID
  message: string;                      // Claude's message to Gemini
  include_code_snippets?: boolean;      // Enrich with code context
}
finalize_conversation

Completes the conversation and generates results.

{
  session_id: string;                   // Active session ID
  summary_format: 'detailed' | 'concise' | 'actionable';
}
get_conversation_status

Checks the status of an ongoing conversation.

{
  session_id: string;                   // Session ID to check
}

Traditional Analysis Tools

escalate_analysis

Main tool for handing off complex analysis from Claude Code to Gemini.

{
  claude_context: {
    attempted_approaches: string[];      // What Claude tried
    partial_findings: any[];            // What Claude found
    stuck_description: string;          // Where Claude got stuck
    code_scope: {
      files: string[];                  // Files to analyze
      entry_points?: CodeLocation[];    // Starting points
      service_names?: string[];         // Services involved
    }
  };
  analysis_type: 'execution_trace' | 'cross_system' | 'performance' | 'hypothesis_test';
  depth_level: 1-5;                     // Analysis depth
  time_budget_seconds?: number;         // Time limit (default: 60)
}
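A request following the schema above might look like this. The file paths, findings, and stuck description are invented for illustration:

```typescript
// Illustrative escalate_analysis request; paths and findings are made up.
interface CodeLocation {
  file: string;
  line: number;
  function_name?: string;
}

const request = {
  claude_context: {
    attempted_approaches: ["Traced the request handler", "Added debug logging"],
    partial_findings: [{ type: "timeout", description: "Upstream call stalls" }],
    stuck_description: "Cannot reproduce the stall locally",
    code_scope: {
      files: ["/abs/path/src/gateway/handler.ts"],
      entry_points: [
        { file: "/abs/path/src/gateway/handler.ts", line: 42 },
      ] as CodeLocation[],
    },
  },
  analysis_type: "execution_trace" as const,
  depth_level: 3,            // 1 = shallow, 5 = exhaustive
  time_budget_seconds: 120,  // override the 60s default
};
```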
trace_execution_path

Deep execution analysis with Gemini's semantic understanding.

{
  entry_point: {
    file: string;
    line: number;
    function_name?: string;
  };
  max_depth?: number;              // Default: 10
  include_data_flow?: boolean;     // Default: true
}
cross_system_impact

Analyze impacts across service boundaries.

{
  change_scope: {
    files: string[];
    service_names?: string[];
  };
  impact_types?: ('breaking' | 'performance' | 'behavioral')[];
}
performance_bottleneck

Deep performance analysis beyond simple profiling.

{
  code_path: {
    entry_point: {
      file: string;
      line: number;
      function_name?: string;
    };
    suspected_issues?: string[];
  };
  profile_depth?: 1-5;              // Default: 3
}
hypothesis_test

Test specific theories about code behavior.

{
  hypothesis: string;
  code_scope: {
    files: string[];
    entry_points?: CodeLocation[];    // Optional array of {file, line, function_name?}
  };
  test_approach: string;
}
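For example, a hypothesis_test request matching the schema above could be built like this; the hypothesis, file names, and line numbers are invented for the sketch:

```typescript
// Hypothetical hypothesis_test request; all concrete values are illustrative.
const hypothesisRequest = {
  hypothesis: "The cache is invalidated on every write, causing rereads",
  code_scope: {
    files: [
      "/abs/path/src/cache/store.ts",
      "/abs/path/src/cache/invalidate.ts",
    ],
    entry_points: [
      { file: "/abs/path/src/cache/store.ts", line: 17, function_name: "set" },
    ],
  },
  test_approach: "Trace writes through set() and count resulting cache misses",
};
```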

Example Use Cases

Conversational Analysis Example

// 1. Start conversation
const session = await start_conversation({
  claude_context: {
    attempted_approaches: ["Checked for N+1 queries", "Profiled database calls"],
    partial_findings: [{ type: "performance", description: "Multiple DB queries in loop" }],
    stuck_description: "Can't determine if queries are optimizable",
    code_scope: { files: ["src/services/UserService.ts"] }
  },
  analysis_type: "performance",
  initial_question: "Are these queries necessary or can they be batched?"
});

// 2. Continue with follow-ups
const response = await continue_conversation({
  session_id: session.sessionId,
  message: "The queries fetch user preferences. Could we use a join instead?",
  include_code_snippets: true
});

// 3. Finalize when ready
const results = await finalize_conversation({
  session_id: session.sessionId,
  summary_format: "actionable"
});

Distributed Trace Analysis

When a failure signature spans multiple services with GB of logs:

  • Claude Code identifies the error pattern and suspicious code
  • Escalate to Gemini to correlate thousands of trace spans across services
  • Gemini processes the full trace timeline and identifies the exact issues

Performance Regression Hunting

When performance degrades but the cause isn't obvious:

  • Claude Code performs quick profiling and identifies hot paths
  • Escalate to Gemini to analyze weeks of performance metrics and code changes
  • Gemini correlates deployment timeline with metrics and pinpoints the exact cause

Troubleshooting

"GEMINI_API_KEY not found"

  • Ensure you've set the GEMINI_API_KEY in your .env file or environment
  • Check that the .env file is in the project root

"File not found" errors

  • Verify that file paths passed to the tools are absolute paths
  • Check file permissions

Gemini API errors

  • Verify your API key is valid and has appropriate permissions
  • Check API quotas and rate limits
  • Ensure your Google Cloud project has the Gemini API enabled

Validation errors

  • Ensure all required parameters are provided
  • Check that parameter names use snake_case (e.g., claude_context, not claudeContext)
  • Review error messages for specific validation requirements
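If your calling code uses camelCase keys, a small helper like the following (not part of the server) can convert them to the snake_case names the validator expects:

```typescript
// Recursively convert camelCase object keys to snake_case.
// This helper is an assumption of this example, not a server API.
function toSnakeCase(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const snake = key.replace(/([A-Z])/g, "_$1").toLowerCase();
    out[snake] =
      value !== null && typeof value === "object" && !Array.isArray(value)
        ? toSnakeCase(value as Record<string, unknown>)
        : value;
  }
  return out;
}

toSnakeCase({ claudeContext: { stuckDescription: "x" } });
// → { claude_context: { stuck_description: "x" } }
```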

How to install this MCP server

For Claude Code

To add this MCP server to Claude Code, run this command in your terminal:

claude mcp add-json "deep-code-reasoning" '{"command":"node","args":["/path/to/deep-code-reasoning-mcp/dist/index.js"],"env":{"GEMINI_API_KEY":"your-gemini-api-key"}}'

See the official Claude Code MCP documentation for more details.

For Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > Tools & Integrations and click "New MCP Server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "deep-code-reasoning": {
            "command": "node",
            "args": [
                "/path/to/deep-code-reasoning-mcp/dist/index.js"
            ],
            "env": {
                "GEMINI_API_KEY": "your-gemini-api-key"
            }
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the added MCP server provides and will call them when it needs to.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.

For Claude Desktop

To add this MCP server to Claude Desktop:

1. Find your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

2. Add this to your configuration file:

{
    "mcpServers": {
        "deep-code-reasoning": {
            "command": "node",
            "args": [
                "/path/to/deep-code-reasoning-mcp/dist/index.js"
            ],
            "env": {
                "GEMINI_API_KEY": "your-gemini-api-key"
            }
        }
    }
}

3. Restart Claude Desktop for the changes to take effect
