
PAL MCP Server

The power of Claude Code / Gemini CLI / Codex CLI + [Gemini / OpenAI / OpenRouter / Azure / Grok / Ollama / Custom Model / All Of The Above] working as one.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "beehiveinnovations-pal-mcp-server": {
      "command": "bash",
      "args": [
        "-c",
        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
      ],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
        "LOG_LEVEL": "INFO",
        "DEFAULT_MODEL": "auto",
        "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
        "GEMINI_API_KEY": "your-key-here",
        "OPENAI_API_KEY": "your-openai-key",
        "OPENROUTER_API_KEY": "your-openrouter-key",
        "MAX_CONVERSATION_TURNS": "50",
        "CONVERSATION_TIMEOUT_HOURS": "6"
      }
    }
  }
}

PAL MCP Server enables you to orchestrate multiple AI models and toolchains within a single, cohesive workflow. You connect your preferred AI providers, manage conversation continuity across models, and run complex, multi-step tasks like code reviews, debugging, and orchestration with centralized control. This enables deeper insights, scalable collaboration, and robust automation for your development processes.

How to use

You use PAL MCP Server by connecting an MCP client or integration that can communicate with the server. Start a session, choose the models or providers you want to involve, and begin a multi-model, multi-tool workflow. You can chain tasks across models, preserve conversation context as work progresses, and route specific subtasks to specialized agents (for example, a planner or a code reviewer). Use the centralized controller to orchestrate the flow, capture consensus when needed, and ensure a final, cohesive output.

How to install

Prerequisites: Python 3.10 or newer, Git, and uv installed on your machine.
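You can check these with a quick shell loop before running the setup (a simple sketch, not part of the repository):

```shell
# Report which prerequisites are on PATH, and whether Python is new enough.
for tool in python3 git uv; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
python3 -c 'import sys; print("python >= 3.10:", sys.version_info >= (3, 10))'
```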

Option A: Clone and automatic setup (recommended)

Start here to install, configure, and auto-detect API keys from your environment.

git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
cd pal-mcp-server

# Handles everything: setup, config, API keys from system environment. 
# Auto-configures Claude Desktop, Claude Code, Gemini CLI, Codex CLI, Qwen CLI
# Enable / disable additional settings in .env
./run-server.sh

Option B: Instant setup with uvx

If you prefer a quick, local setup using uvx, add the MCP server entry to your client settings. This configuration snippet wires in a local server wrapper and environment variables for your API keys.

// Add to ~/.claude/settings.json or .mcp.json
// Don't forget to add your API keys under env
{
  "mcpServers": {
    "pal": {
      "command": "bash",
      "args": ["-c", "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
        "GEMINI_API_KEY": "your-key-here",
        "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
        "DEFAULT_MODEL": "auto"
      }
    }
  }
}

Start using

Once installed, you can issue prompts that leverage multiple models and tools. For example: analyze code with a coordinated team, plan changes with the planner, and validate with a precommit workflow. You control the prompt and decide which models to involve and when.

Provider configuration

PAL activates any provider whose credentials are present in your environment. Customize providers and API keys in your .env file or MCP settings to control which models are available.
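As a sketch, a minimal .env might look like the following. The key names follow the env blocks shown in the installation snippets above; the values are placeholders you replace with your own keys:

```shell
# .env (placeholder values; only providers whose keys are set are activated)
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
OPENROUTER_API_KEY=your-openrouter-key
DEFAULT_MODEL=auto
LOG_LEVEL=INFO
```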

Core concepts and tool orchestration

Key collaboration tools include a bridge to external AI CLIs, brainstorming and decision tools, and structured planning and consensus workflows. You can enable or disable tools to optimize token usage and performance. These tools support multi-model conversations, context preservation across tool boundaries, and robust debugging and code review workflows.

You can customize which tools are enabled, tune model usage, and adjust thinking depth. Remember that enabling more tools increases token usage, so enable only what you need for your workflows.

Configuration and tool setup

Default settings enable core collaboration tools, essential code quality tools, and rapid API lookups. You can adjust which tools are active by editing environment variables or MCP settings to tailor your session.

To enable additional tools, modify the DISABLED_TOOLS setting so that the list excludes the tools you want active, then restart your session to apply changes.

// In ~/.claude/settings.json or .mcp.json
{
  "mcpServers": {
    "pal": {
      "env": {
        "DISABLED_TOOLS": "refactor,testgen,secaudit,docgen,tracer",
        "DEFAULT_MODEL": "pro",
        "DEFAULT_THINKING_MODE_THINKDEEP": "high",
        "GEMINI_API_KEY": "your-gemini-key",
        "OPENAI_API_KEY": "your-openai-key",
        "OPENROUTER_API_KEY": "your-openrouter-key",
        "LOG_LEVEL": "INFO",
        "CONVERSATION_TIMEOUT_HOURS": "6",
        "MAX_CONVERSATION_TURNS": "50"
      }
    }
  }
}
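For example, to re-enable the analyze tool you can rebuild the DISABLED_TOOLS list without it (a portable shell sketch; paste the result back into your settings):

```shell
# Start from the default disabled list and drop "analyze" so it becomes active.
DISABLED_TOOLS="analyze,refactor,testgen,secaudit,docgen,tracer"
NEW=$(printf '%s' "$DISABLED_TOOLS" | tr ',' '\n' | grep -v '^analyze$' | tr '\n' ',' | sed 's/,$//')
echo "$NEW"
```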

Examples and workflows

Multi-model code review: combine models to review, plan, implement, and validate changes across a sequence of steps, preserving context to ensure alignment.

Collaborative debugging and architecture planning are supported through multi-model debates, consensus, and phased implementations.

Watch tools in action

Visual demonstrations and examples show how the toolchain coordinates models and tools across tasks like planning, review, and implementation.

Key features

AI orchestration with auto model selection, multi-model workflows, conversation continuity, and context revival. Model support spans multiple providers, vision capabilities, local model support, and the ability to bypass token limits when needed.

Example workflows

Multi-model code review, collaborative debugging, and architecture planning examples illustrate practical usage and outcomes.

Notes and advanced usage

Explore an advanced usage guide for power-user features, model configuration, and complex workflows. You will find practical guidance on configuring providers, managing tool activation, and optimizing performance.

Available tools

clink

Bridge requests to external AI CLIs, enabling CLI-to-CLI collaboration and subagents within a single workflow.

chat

Brainstorm ideas, validate approaches, and generate implementations across multiple models within a single conversation.

thinkdeep

Perform extended reasoning, explore edge cases, and evaluate alternative perspectives.

planner

Break down complex projects into structured, actionable plans.

consensus

Elicit expert opinions from multiple models to form a consensus on decisions.

debug

Systematic investigation and root cause analysis of issues.

precommit

Validate changes before committing to prevent regressions.

codereview

Professional code reviews with severity levels and actionable feedback.

analyze

Understand architecture, patterns, and dependencies across codebases (optional, may be disabled by default).

refactor

Intelligent code refactoring with decomposition and restructuring insights.

testgen

Generate tests, including edge cases, to improve coverage.

secaudit

Security audits with OWASP Top 10 analysis.

docgen

Generate documentation with complexity and usage analysis.

apilookup

Look up current API/SDK documentation to ensure up-to-date information.

challenge

Promotes critical thinking by challenging assumptions and avoiding reflexive agreement.

tracer

Static analysis prompts to map call flows (optional).