
PAL MCP Server

A multi-model collaboration platform that orchestrates AI models and tools for code analysis, planning, and execution.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "beehiveinnovations-pal-mcp-server": {
      "command": "bash",
      "args": [
        "-c",
        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \\\"$p\\\" ] && exec \\\"$p\\\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
      ],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
        "DEFAULT_MODEL": "auto",
        "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
        "GEMINI_API_KEY": "your-key-here"
      }
    }
  }
}
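
The bash -c wrapper probes several common install locations for uvx and execs the first one it finds, so the server launches even when uvx is not on the client's PATH. If uvx is already on your PATH, the launch command reduces to:

uvx --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server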

PAL MCP enables orchestration of multiple AI models and tools through a single provider-abstracted server. It lets you connect external AI CLIs, run multi-model workflows, and maintain conversation continuity across models, making complex development tasks like code reviews, planning, and debugging more efficient from a unified control plane.

How to use

Use PAL MCP with an MCP client to orchestrate multiple AI models and tools from a single endpoint. Start by cloning the repository, then run the server so your MCP client can connect. For a quicker start, you can instead point a local uvx runner at the PAL MCP server. Once running, you can drive multi-model workflows, plan with the planner tool, and consult multiple models for consensus, reviews, and debugging while preserving context across tools.
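
As an illustrative shortcut, assuming your client is Claude Code (whose CLI can register MCP servers from a JSON snippet), the uvx-based entry from Option B below could be registered in one step. The server name pal and the env values mirror that configuration.

# Hypothetical one-step registration via Claude Code's CLI; adjust for your client
claude mcp add-json pal '{"command":"uvx","args":["--from","git+https://github.com/BeehiveInnovations/pal-mcp-server.git","pal-mcp-server"],"env":{"GEMINI_API_KEY":"your-key-here"}}'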

How to install

Prerequisites: Python 3.10+, Git, uv installed.
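
If uv is not installed yet, Astral's standalone installer (which also provides uvx) is one way to get it; this is the upstream installer, not something PAL-specific:

# Install uv/uvx with Astral's standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh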

Option A: Clone and Automatic Setup (recommended)

git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
cd pal-mcp-server

# Handles everything: setup, config, and API keys from the system environment.
# Auto-configures Claude Desktop, Claude Code, Gemini CLI, Codex CLI, and Qwen CLI.
# Enable or disable additional settings in .env.
./run-server.sh
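
Because run-server.sh picks up API keys from the system environment, a minimal first run might look like the sketch below. GEMINI_API_KEY matches the configuration above; any other key names are assumptions, and .env.example lists the full set.

# Export provider keys before the first run; run-server.sh reads them from the environment
export GEMINI_API_KEY="your-key-here"
export OPENAI_API_KEY="your-key-here"  # assumed key name; see .env.example
./run-server.sh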

Option B: Instant Setup with uvx

// Add to ~/.claude/settings.json or .mcp.json
// Don't forget to add your API keys under env
{
  "mcpServers": {
    "pal": {
      "command": "bash",
      "args": ["-c", "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
        "GEMINI_API_KEY": "your-key-here",
        "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
        "DEFAULT_MODEL": "auto"
      }
    }
  }
}
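
After saving the configuration, it helps to confirm the client actually sees the server. With Claude Code, for example, the MCP listing command shows registered servers; other clients have their own equivalents.

# Verify the registration (Claude Code example)
claude mcp list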

Start Using

After installation, use the PAL MCP server through your MCP client to run multi-model workflows, gather consensus, and perform code reviews with tools such as planner, codereview, and consensus. You can compose prompts that involve multiple models and maintain conversation continuity as you work.

Provider Configuration

PAL activates any provider that has credentials in your .env. See .env.example for deeper customization.
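
A minimal .env sketch, assuming Gemini is the only provider you configure (variable names mirror the env block above; .env.example is authoritative):

# .env: only providers with credentials are activated
GEMINI_API_KEY=your-key-here
DEFAULT_MODEL=auto
DISABLED_TOOLS=analyze,refactor,testgen,secaudit,docgen,tracer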

Core Tools

Note: Tools are enabled or disabled via configuration. Core collaboration tools include clink (CLI bridge), chat (brainstorming and implementation), thinkdeep (extended reasoning), planner (structured plans), and consensus (multi-model opinions). Essential code quality tools include codereview, precommit, and debug. You can enable or disable additional tools such as analyze, refactor, testgen, secaudit, docgen, and tracer as needed.
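
The DISABLED_TOOLS variable shown in the configurations above is what controls this. As a sketch (assuming an empty value disables nothing):

# In .env or in the MCP client's env block
DISABLED_TOOLS=                          # assumed: empty value enables all tools
DISABLED_TOOLS=analyze,refactor,tracer   # disable only these three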

Example Workflows

Multi-model Code Review: Consult gemini pro and o3 for independent reviews, then use planner to create a fix strategy and coordinate across models for implementation and pre-commit checks.
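
An illustrative prompt for this workflow (the file path is hypothetical and the wording is free-form, not a fixed syntax):

Perform a codereview of src/auth.py using gemini pro and o3, then use planner
to draft a fix strategy and run precommit before I commit.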

Quick Links

Documentation covers getting started, tools reference, advanced usage, configuration, and provider additions. Troubleshooting and contribution guidelines are available as well.

Notes

This MCP server configuration is designed to be used with the PAL MCP server described in this README. If you need to customize environments or add providers, follow the configuration examples in the repository’s MCP setup documentation.

Appendix: Quick Start Details

Complete setup guide and troubleshooting are available in the repository. The PAL MCP server is designed to simplify multi-model orchestration across AI tools and models with an emphasis on conversation continuity and controlled workflows.

Available tools

clink: Bridge requests to external AI CLIs and manage subagents for isolated contexts

chat: Brainstorm ideas, validate approaches, and generate code or implementations across models

thinkdeep: Perform extended reasoning and explore edge cases beyond initial assumptions

planner: Break down complex projects into structured, actionable plans

consensus: Gather expert opinions from multiple models to reach a decision

debug: Systematic investigation and root cause analysis of issues

precommit: Validate changes before committing to prevent regressions

codereview: Professional reviews with severity levels and actionable feedback

analyze: Understand architecture and dependencies across codebases (optional)

refactor: Intelligent code refactoring and decomposition (optional)

testgen: Generate comprehensive tests including edge cases (optional)

secaudit: Security audits focusing on common vulnerabilities (optional)

docgen: Generate documentation with complexity and coverage insights (optional)

apilookup: Fetch current-year API/SDK documentation to keep prompts in sync

challenge: Encourage critical thinking and prevent reflexive agreement

tracer: Static analysis prompts for call-flow mapping (optional)