Bridges OpenAI-compatible HTTP endpoints and CLI agents to orchestrate multi-LLM workflows with voting, debates, and iterative refinement.
Configuration
{
  "mcpServers": {
    "nesquikm-mcp-rubber-duck": {
      "command": "npx",
      "args": [
        "mcp-rubber-duck"
      ],
      "env": {
        "LOG_LEVEL": "info",
        "MCP_SERVER": "true",
        "GROQ_API_KEY": "YOUR_GROQ_API_KEY",
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
        "CUSTOM_{NAME}_*": "CUSTOM_OPENAI_API_KEY",
        "DEFAULT_PROVIDER": "openai",
        "MCP_BRIDGE_ENABLED": "true",
        "DEFAULT_TEMPERATURE": "0.7"
      }
    }
  }
}

You can run MCP Rubber Duck to query and compare multiple language models through OpenAI-compatible HTTP endpoints and CLI coding agents. It provides a unified interface to manage conversations, compare results side-by-side, vote on responses, and iteratively refine answers across several ducks, making it easier to debug, evaluate, and orchestrate multi-LLM workflows.
Install and start the MCP Rubber Duck server, then connect your MCP client to manage multiple ducks. Use the built-in tools to ask questions to several providers, view side-by-side comparisons, run consensus voting, and initiate debates or iterative refinements between ducks. You can also enable the MCP Bridge to connect to other MCP servers and use guardrails to control safety, rate limits, and PII redaction.
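Since the server talks to providers over OpenAI-compatible HTTP endpoints, every duck ultimately receives a request in the same shape. The sketch below illustrates that shared request format; the base URL and model name are placeholders, not values mandated by MCP Rubber Duck.

```typescript
// Sketch: the request shape an OpenAI-compatible chat endpoint expects.
// Base URL and model name below are illustrative assumptions.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(baseUrl: string, model: string, prompt: string) {
  return {
    url: `${baseUrl}/chat/completions`,
    body: {
      model,
      temperature: 0.7, // mirrors DEFAULT_TEMPERATURE in the config above
      messages: [{ role: "user", content: prompt }] as ChatMessage[],
    },
  };
}

const req = buildChatRequest(
  "https://api.openai.com/v1",
  "gpt-4o-mini",
  "Why is my loop off by one?"
);
```

Because every provider accepts this same shape, adding a new duck is mostly a matter of pointing the client at a different base URL and API key.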
Prerequisites: install Node.js version 20 or higher and a package manager such as npm or yarn. You should also have at least one API key for an HTTP provider, or a CLI coding agent installed locally.
# Install the MCP Rubber Duck globally
npm install -g mcp-rubber-duck
# Or run directly with npx in your environment
npx mcp-rubber-duck

Configure your environment and MCP settings using a .env file or a JSON config. You can specify API keys for HTTP providers, default provider and temperature, log levels, and how the server runs. The following environment variables are commonly used:
{
  "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
  "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
  "GROQ_API_KEY": "YOUR_GROQ_API_KEY",
  "DEFAULT_PROVIDER": "openai",
  "DEFAULT_TEMPERATURE": "0.7",
  "LOG_LEVEL": "info",
  "MCP_SERVER": "true",
  "MCP_BRIDGE_ENABLED": "true"
}

- Create multiple ducks configured to different providers and query them in parallel to compare responses.
- Use the Duck Council to gather responses from all configured ducks at once.
- Run consensus voting to obtain a ranked decision with reasoning and confidence scores.
- Engage in structured debates or iterative refinement to improve outputs.
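The parallel-query pattern behind the first two items can be sketched as a simple fan-out: send one question to every configured duck at once and collect labelled answers for comparison. The `Duck` type and stub replies below are illustrative stand-ins, not the server's internal API.

```typescript
// Sketch: fan the same question out to several "ducks" in parallel and
// collect labelled answers for a side-by-side comparison. The stub ducks
// here stand in for real HTTP providers or CLI agents (no network calls).
type Duck = { name: string; reply: (q: string) => Promise<string> };

async function askAllDucks(ducks: Duck[], question: string) {
  const answers = await Promise.all(
    ducks.map(async (d) => ({ duck: d.name, answer: await d.reply(question) }))
  );
  return answers; // one entry per duck, in configured order
}

const ducks: Duck[] = [
  { name: "openai", reply: async () => "Use a half-open interval." },
  { name: "gemini", reply: async () => "Check the loop bound." },
];
```

Because `Promise.all` preserves input order, the comparison view can always line answers up against the provider list, regardless of which duck finished first.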
If a provider isn’t responding or you encounter rate limiting, verify your API keys and endpoints, then check health status with the provided tools. Ensure local CLI agents or local runtimes required by certain providers are running and accessible. Review logs for detailed error messages to pinpoint misconfigurations.
Leverage guardrails to enforce rate limits, per-server token limits, and PII redaction. Use per-provider approvals for sensitive actions and monitor health checks to detect unhealthy providers. Apply session-based approvals to manage access to MCP features.
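To make two of those guardrails concrete, here is a minimal sketch of a PII redaction pass and a sliding-window rate limiter. The regex patterns and limits are illustrative assumptions, not the rules MCP Rubber Duck actually ships with.

```typescript
// Sketch: naive PII redaction (emails, US SSN-shaped numbers) and a
// sliding-window rate limiter. Patterns and limits are illustrative only.
function redactPII(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]") // email addresses
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[ssn]");    // SSN-shaped numbers
}

class RateLimiter {
  private timestamps: number[] = [];
  constructor(private maxCalls: number, private windowMs: number) {}

  // Returns true if the call is allowed within the current window.
  allow(now: number = Date.now()): boolean {
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxCalls) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Running redaction before a prompt leaves the process, and rate limiting before it reaches a provider, keeps both checks independent of any particular duck.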
- Quick multi-duck query: ask the same question to multiple providers and view a side-by-side comparison.
- Consensus voting: collect votes from ducks with explanations and confidence, then select the top-ranked response.
- Iterative refinement: two ducks collaboratively improve a single answer through back-and-forth revisions.
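The consensus-voting idea above can be sketched as confidence-weighted scoring: each duck votes for a candidate answer with a confidence value, and candidates are ranked by their total. The `Vote` field names are assumptions for illustration, not the server's actual schema.

```typescript
// Sketch: rank candidate answers by confidence-weighted votes from ducks.
// Field names are illustrative assumptions, not the server's schema.
interface Vote {
  voter: string;
  candidate: string;
  confidence: number; // 0..1
  reasoning: string;
}

function rankByConsensus(votes: Vote[]): { candidate: string; score: number }[] {
  const scores = new Map<string, number>();
  for (const v of votes) {
    scores.set(v.candidate, (scores.get(v.candidate) ?? 0) + v.confidence);
  }
  return [...scores.entries()]
    .map(([candidate, score]) => ({ candidate, score }))
    .sort((a, b) => b.score - a.score); // highest total confidence first
}

const votes: Vote[] = [
  { voter: "openai", candidate: "A", confidence: 0.9, reasoning: "handles edge cases" },
  { voter: "gemini", candidate: "B", confidence: 0.6, reasoning: "simpler" },
  { voter: "groq", candidate: "A", confidence: 0.7, reasoning: "matches the spec" },
];
```

Weighting by confidence rather than counting raw votes lets a single high-confidence dissent outrank two lukewarm agreements, which is why the tool reports confidence alongside reasoning.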
Explore the available prompts, tools, and prompt templates to structure multi-LLM workflows. Set up the required providers, enable MCP Apps for rich UIs, and refer to setup guides for provider-specific considerations.
Keep dependencies up to date and monitor health checks for providers. Regularly review guardrails and usage tracking to understand costs and ensure compliant behavior across all ducks.
The server's built-in tools cover the following workflows:

- Ask a single question to a specific LLM provider.
- Hold a conversation with context maintained across messages.
- Clear all stored conversation history.
- List configured providers and their health status.
- List available models for providers.
- Ask the same question to multiple providers in parallel.
- Collect and view responses from all configured ducks.
- Show usage statistics and estimated costs per provider.
- Run multi-duck voting with reasoning and confidence scores.
- Have one duck evaluate and rank the others' responses.
- Iteratively refine a response between two ducks.
- Run structured multi-round debates between ducks.
- Check MCP Bridge status and connected servers.
- See pending MCP tool approval requests.
- Approve or deny a duck's MCP tool request.