
PAL MCP Server

A multi-model MCP server that coordinates AI models and external CLIs for enhanced code analysis, planning, and collaboration.

Installation
Add the following to your MCP client configuration file.

Configuration

{
    "mcpServers": {
        "pal": {
            "command": "bash",
            "args": [
                "-lc",
                "./run-server.sh"
            ],
            "env": {
                "GEMINI_API_KEY": "your-gemini-key",
                "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
                "DEFAULT_MODEL": "auto",
                "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin"
            }
        }
    }
}
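The exact location of the configuration file depends on your MCP client (for Claude Desktop it is `claude_desktop_config.json`, for example). Before restarting the client, a quick sanity check that the snippet is valid JSON can save a debugging round trip. A minimal sketch, assuming `python3` is on your PATH; the `/tmp` path and file name here are illustrative, not part of PAL MCP:

```shell
# Write a trimmed sample config, then validate it with Python's stdlib JSON parser.
cat > /tmp/pal-mcp-config.json <<'EOF'
{
  "mcpServers": {
    "pal": { "command": "bash", "args": ["-lc", "./run-server.sh"] }
  }
}
EOF
python3 -m json.tool < /tmp/pal-mcp-config.json > /dev/null && echo "config OK"
```

The same check works on your real config file; a stray trailing comma is the most common reason a client silently ignores a server entry.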

PAL MCP is a multi-model collaboration layer that lets you orchestrate multiple AI models and external CLIs from a single control plane. It enables conversation continuity, cross-model workflows, and strategic delegation of tasks like code reviews, planning, and debugging, all while keeping your CLI in charge of the workflow.

How to use

You use PAL MCP by connecting your preferred MCP client to a central server that coordinates multiple AI models and tools. Start a session, choose which models or external CLIs to involve, and define your workflow. PAL MCP handles multi-model orchestration, maintains conversation continuity across tools, and allows you to revive context across resets or switches between models. You can run tasks like multi-model code reviews, collaborative debugging, and architecture planning—all in a single prompt flow.

How to install

Before installing, make sure Python 3.10 or newer, Git, and uv (or an equivalent runtime) are installed on your system.
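A quick way to confirm the prerequisites above are in place; this is a generic sketch, not part of PAL MCP's own tooling:

```shell
# Print each prerequisite's version, or a warning if it is missing from PATH.
command -v git >/dev/null && git --version || echo "git not found"
command -v uv  >/dev/null && uv --version  || echo "uv not found"
python3 -c 'import sys; ok = sys.version_info >= (3, 10); print("Python OK" if ok else "Python 3.10+ required")'
```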

Choose an installation approach and follow the steps exactly as shown.

Option A: Clone and automatic setup (recommended). The setup script configures everything for you and auto-detects common MCP clients.

git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
cd pal-mcp-server

# Handles everything: setup, config, API keys from system environment. 
# Auto-configures Claude Desktop, Claude Code, Gemini CLI, Codex CLI, Qwen CLI
# Enable / disable additional settings in .env
./run-server.sh
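After the script finishes, you can verify that the server was actually registered with your client. If you use Claude Code, its CLI can list configured MCP servers (`claude mcp list`); other clients expose this in their settings UI. A hedged check:

```shell
# Ask Claude Code to list registered MCP servers, if its CLI is installed.
command -v claude >/dev/null && claude mcp list \
  || echo "claude CLI not on PATH; check your MCP client's settings instead"
```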

Additional configuration and setup notes

If you prefer a manual, environment-driven setup, you can configure a local MCP instance using an initialization snippet that launches the server through a shell command. This method relies on a shell to start the server in a fresh context and pass credentials via environment variables.
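A minimal sketch of such a manual launch, assuming the repository has already been cloned and that run-server.sh reads its settings from the environment (variable names taken from the sample configuration above):

```shell
# Export the same variables the JSON snippet passes via "env",
# then launch the server directly from the cloned repository.
export GEMINI_API_KEY="your-gemini-key"
export DEFAULT_MODEL="auto"
export DISABLED_TOOLS="analyze,refactor,testgen,secaudit,docgen,tracer"
cd pal-mcp-server && ./run-server.sh
```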

Available tools

clink

Bridge external AI CLIs into the workflow and let subagents run tasks with context isolation and role specialization

chat

Collaborative brainstorming and pattern generation across multiple models with conversation continuity

thinkdeep

Extended reasoning and edge-case analysis to explore alternative approaches and deep insights

planner

Decompose complex projects into structured, actionable plans

consensus

Multi-model debate and expert opinions to reach informed decisions

debug

Systematic root cause analysis and investigation of issues across the codebase

precommit

Validate changes before committing to prevent regressions

codereview

Professional code reviews with severity levels and actionable feedback

analyze

Optional architecture and pattern analysis across the codebase

apilookup

Live API/SDK documentation lookups to stay current during workflows

challenge

Critical thinking prompts to prevent reflexive agreement and encourage deep assessment

tracer

Static analysis prompts for call-flow mapping and traceability
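Note that several of these tools (analyze, tracer, and others) appear in the DISABLED_TOOLS list in the sample configuration above. Re-enabling one is a matter of removing its name from that comma-separated value; for example, assuming the same env-based setup:

```shell
# analyze and tracer removed from the list, so both become available.
DISABLED_TOOLS="refactor,testgen,secaudit,docgen"
```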