A multi-model collaboration platform that orchestrates multiple AI models and tools for code analysis, planning, and execution.
Configuration

{
  "mcpServers": {
    "beehiveinnovations-pal-mcp-server": {
      "command": "bash",
      "args": [
        "-c",
        "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"
      ],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
        "DEFAULT_MODEL": "auto",
        "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
        "GEMINI_API_KEY": "your-key-here"
      }
    }
  }
}

PAL MCP enables orchestration of multiple AI models and tools through a single provider-abstracted server. It lets you connect external AI CLIs, run multi-model workflows, and maintain conversation continuity across models, making complex development tasks such as code reviews, planning, and debugging more efficient from a unified control plane.
Use PAL MCP with an MCP client to orchestrate multiple AI models and tools from a single endpoint. Start by cloning the repository and running the server so your MCP client can connect, or, for a quicker start, point a local uvx runner at the PAL MCP server. Once running, you can drive multi-model workflows, build plans with the planner tool, and consult multiple models for consensus, reviews, and debugging while preserving context across tools.
Prerequisites: Python 3.10+, Git, uv installed.
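If you're unsure whether these are present, a quick shell check confirms the toolchain; the last command is uv's documented standalone installer, shown here as one option among several (Homebrew, pipx, and others also work):

# Verify the prerequisites PAL MCP expects
python3 --version   # should report 3.10 or newer
git --version
uv --version && uvx --version

# Install uv (which provides uvx) if it is missing
curl -LsSf https://astral.sh/uv/install.sh | sh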
Option A: Clone and Automatic Setup (recommended)

git clone https://github.com/BeehiveInnovations/pal-mcp-server.git
cd pal-mcp-server
# Handles everything: setup, config, API keys from system environment.
# Auto-configures Claude Desktop, Claude Code, Gemini CLI, Codex CLI, Qwen CLI
# Enable / disable additional settings in .env
./run-server.sh

Option B: Instant Setup with uvx
// Add to ~/.claude/settings.json or .mcp.json
// Don't forget to add your API keys under env
{
  "mcpServers": {
    "pal": {
      "command": "bash",
      "args": ["-c", "for p in $(which uvx 2>/dev/null) $HOME/.local/bin/uvx /opt/homebrew/bin/uvx /usr/local/bin/uvx uvx; do [ -x \"$p\" ] && exec \"$p\" --from git+https://github.com/BeehiveInnovations/pal-mcp-server.git pal-mcp-server; done; echo 'uvx not found' >&2; exit 1"],
      "env": {
        "PATH": "/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin:~/.local/bin",
        "GEMINI_API_KEY": "your-key-here",
        "DISABLED_TOOLS": "analyze,refactor,testgen,secaudit,docgen,tracer",
        "DEFAULT_MODEL": "auto"
      }
    }
  }
}

After installation, use the PAL MCP server through your MCP client to run multi-model workflows, get consensus, and perform code reviews with tools like planner, codereview, and others. You can compose prompts that involve multiple models and maintain continuity across the conversation as you work.
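Once the server is connected, these workflows are driven by plain natural-language prompts to your client; for example (illustrative phrasings, with model names depending on the providers you have configured):

"Use planner to break the auth-service migration into phased, testable steps"
"Get a consensus from gemini pro and o3 on whether to adopt event sourcing for the billing module"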
PAL activates any provider that has credentials in your .env. See .env.example for deeper customization.
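As a sketch of what provider activation looks like in .env (GEMINI_API_KEY and DEFAULT_MODEL appear in the configurations above; the other variable names are assumptions to verify against .env.example):

# .env -- a provider is activated when its key has a value
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key          # assumed variable name; confirm in .env.example
OPENROUTER_API_KEY=your-openrouter-key  # assumed variable name; confirm in .env.example
DEFAULT_MODEL=auto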
Note: Tools are enabled or disabled via configuration. Core collaboration tools include clink (CLI bridge), chat (brainstorming and implementation), thinkdeep (extended reasoning), planner (structured plans), and consensus (multi-model opinions). Essential code quality tools include codereview, precommit, and debug. You can enable or disable additional tools such as analyze, refactor, testgen, secaudit, docgen, and tracer as needed.
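The DISABLED_TOOLS variable used in the configurations above takes a comma-separated list, so re-enabling an optional tool means removing it from that list; a sketch:

# Default from the examples above: six optional tools disabled
DISABLED_TOOLS=analyze,refactor,testgen,secaudit,docgen,tracer

# Re-enable testgen by dropping it from the list
DISABLED_TOOLS=analyze,refactor,secaudit,docgen,tracer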
Multi-model Code Review: run codereview with gemini pro and o3, then use planner to create a fix strategy, coordinating across models for implementation and pre-commit checks.
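A single prompt can drive that whole flow; something along these lines (illustrative wording):

"Perform a codereview of src/payments using gemini pro and o3, use planner to turn the findings into a fix strategy, then run precommit before I commit the changes"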
Documentation covers getting started, tools reference, advanced usage, configuration, and provider additions. Troubleshooting and contribution guidelines are available as well.
The MCP client configuration shown above is designed for the PAL MCP server described in this README. To customize environments or add providers, follow the configuration examples in the repository's MCP setup documentation.
Complete setup guide and troubleshooting are available in the repository. The PAL MCP server is designed to simplify multi-model orchestration across AI tools and models with an emphasis on conversation continuity and controlled workflows.
Available tools (names matched to the descriptions per the Note above; apilookup and challenge are not listed there, so confirm them in the tools reference):

clink: Bridge requests to external AI CLIs and manage subagents for isolated contexts (see the example prompt after this list)
chat: Brainstorm ideas, validate approaches, and generate code or implementations across models
thinkdeep: Perform extended reasoning and explore edge cases beyond initial assumptions
planner: Break down complex projects into structured, actionable plans
consensus: Gather expert opinions from multiple models to reach a decision
debug: Systematic investigation and root cause analysis of issues
precommit: Validate changes before committing to prevent regressions
codereview: Professional reviews with severity levels and actionable feedback
analyze: Understand architecture and dependencies across codebases (optional)
refactor: Intelligent code refactoring and decomposition (optional)
testgen: Generate comprehensive tests including edge cases (optional)
secaudit: Security audits focusing on common vulnerabilities (optional)
docgen: Generate documentation with complexity and coverage insights (optional)
apilookup: Fetch current-year API/SDK documentation to keep prompts in sync
challenge: Encourage critical thinking and prevent reflexive agreement
tracer: Static analysis prompts for call-flow mapping (optional)
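As a closing illustration, the clink bridge referenced above is likewise driven by a plain prompt from your client (wording illustrative; the bridged CLI must be installed and authenticated separately):

"Use clink to have the Gemini CLI review the diff in my working tree and report its findings back here"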