
Code Crosscheck MCP Server

Provides cross-model bias-aware code review by evaluating code with distinct models and prompts.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "olaservo-mcp-code-crosscheck": {
      "command": "node",
      "args": [
        "path/to/mcp-code-crosscheck/dist/index.js"
      ]
    }
  }
}

The MCP Code Crosscheck server helps assess code with bias mitigation strategies. It supports two review modes, bias-aware and adversarial, and can route evaluation to a different AI model than the one that generated the code to reduce self-bias. You can run it locally or connect via an MCP client to incorporate structured, comparative code reviews into your workflow.

How to use

You can use the server as part of an MCP client workflow to review code with bias mitigation. The server can detect the AI model that generated code from commit authors and then perform a bias-aware review by default or an adversarial review when you need a thorough, security-focused check. The output is structured to highlight issues, provide metrics, and offer alternatives.
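As a sketch, a review request to the server is a standard MCP tools/call message. The argument names below (code, language, reviewMode) are assumptions for illustration, not the server's confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "review_code",
    "arguments": {
      "code": "function add(a, b) { return a + b; }",
      "language": "javascript",
      "reviewMode": "bias-aware"
    }
  }
}
```

The structured response would then surface issues, metrics, and alternatives as described above.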

How to install

Prerequisites: Node.js and npm must be installed on your machine.

# Clone the project
git clone <repository-url>
cd mcp-code-crosscheck

# Install dependencies
npm install

# Build the project
npm run build
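After building, a quick check that the entry point exists can catch path mistakes before wiring the server into a client. This is a sketch; the dist/index.js location is taken from the configuration snippet above and assumes you run it from the repository root:

```shell
# Verify the build output exists; after a successful npm run build this
# reports "build ok", otherwise "build missing".
ls dist/index.js >/dev/null 2>&1 && echo "build ok" || echo "build missing"
```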

Additional setup and usage notes

To run the MCP server locally, invoke Node.js with the path to the built entry file. The following configuration snippet shows how to expose the server to your MCP client; add it to your MCP client configuration.

{
  "mcpServers": {
    "code_crosscheck": {
      "command": "node",
      "args": ["path/to/mcp-code-crosscheck/dist/index.js"],
      "env": {}
    }
  }
}

Notes on how it works

The server offers two review modes. Bias-aware aims to ignore known bias triggers during evaluation, while adversarial applies a more critical, thorough framing. Cross-model review is supported by evaluating, when possible, with a model different from the one that generated the code. You can explicitly request adversarial mode for security-critical checks.
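For a security-critical check, the same tools/call shape can request the adversarial mode explicitly. As before, the exact argument names are an assumption for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "review_code",
    "arguments": {
      "code": "eval(userInput)",
      "language": "javascript",
      "reviewMode": "adversarial"
    }
  }
}
```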

Security and best practices

Use bias-aware mode for regular development to reduce false positives from style or comment biases. Reserve adversarial mode for security-critical paths where you are willing to accept higher false positives or more aggressive review framing. Always corroborate MCP reviews with static analysis and manual review.

Troubleshooting

If the server does not start, verify that the built index file exists at the specified path and that Node.js is available on your system. Check your MCP client configuration to ensure the mcpServers entry is correctly referenced.
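To rule out client-side configuration issues, you can also start the server manually with node and the path to dist/index.js, then paste an MCP initialize request on stdin; a healthy server replies with its capabilities. The message below follows the standard MCP handshake (the clientInfo values are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "manual-test", "version": "0.0.0" }
  }
}
```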

Available tools

review_code

Comprehensive code review covering security, performance, and maintainability with a structured checklist output

detect_model_from_authors

Detect AI models from commit author information to guide model selection during reviews

fetch_commit

Fetch commit details using the integrated GitHub MCP server or CLI fallback to provide context for reviews

fetch_pr_commits

Fetch PR commits using the MCP workflow to analyze changes across pull requests
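As an illustration of the fetch tools, a commit-context request could look like the call below. The owner, repo, and sha argument names are hypothetical and may differ from the server's actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "fetch_commit",
    "arguments": {
      "owner": "olaservo",
      "repo": "mcp-code-crosscheck",
      "sha": "abc1234"
    }
  }
}
```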
