
MCP Rubber Duck MCP Server

Bridges OpenAI-compatible HTTP endpoints and CLI agents to orchestrate multi-LLM workflows with voting, debates, and iterative refinement.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "nesquikm-mcp-rubber-duck": {
      "command": "npx",
      "args": [
        "mcp-rubber-duck"
      ],
      "env": {
        "LOG_LEVEL": "info",
        "MCP_SERVER": "true",
        "GROQ_API_KEY": "YOUR_GROQ_API_KEY",
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
        "CUSTOM_{NAME}_*": "CUSTOM_OPENAI_API_KEY",
        "DEFAULT_PROVIDER": "openai",
        "MCP_BRIDGE_ENABLED": "true",
        "DEFAULT_TEMPERATURE": "0.7"
      }
    }
  }
}

You can run MCP Rubber Duck to query and compare multiple language models through OpenAI-compatible HTTP endpoints and CLI coding agents. It provides a unified interface to manage conversations, compare results side-by-side, vote on responses, and iteratively refine answers across several ducks, making it easier to debug, evaluate, and orchestrate multi-LLM workflows.

How to use

Install and start the MCP Rubber Duck server, then connect your MCP client to manage multiple ducks. Use the built-in tools to ask questions to several providers, view side-by-side comparisons, run consensus voting, and initiate debates or iterative refinements between ducks. You can also enable the MCP Bridge to connect to other MCP servers and use guardrails to control safety, rate limits, and PII redaction.
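For example, a single-duck query could be issued as a standard MCP `tools/call` request against the `ask_duck` tool. This is a minimal sketch; the argument names (`prompt`, `provider`) are illustrative assumptions, not confirmed parameter names.

```json
{
  "method": "tools/call",
  "params": {
    "name": "ask_duck",
    "arguments": {
      "prompt": "Explain the difference between TCP and UDP.",
      "provider": "openai"
    }
  }
}
```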

How to install

Prerequisites: install Node.js version 20 or higher and a package manager such as npm or yarn. You should also have at least one API key for an HTTP provider, or a CLI coding agent installed locally.

# Install the MCP Rubber Duck globally
npm install -g mcp-rubber-duck

# Or run directly with npx in your environment
npx mcp-rubber-duck

Configuration

Configure your environment and MCP settings using a .env file or a JSON config. You can specify API keys for HTTP providers, default provider and temperature, log levels, and how the server runs. The following environment variables are commonly used:

Additional configuration details

{
  "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
  "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY",
  "GROQ_API_KEY": "YOUR_GROQ_API_KEY",
  "DEFAULT_PROVIDER": "openai",
  "DEFAULT_TEMPERATURE": 0.7,
  "LOG_LEVEL": "info",
  "MCP_SERVER": true,
  "MCP_BRIDGE_ENABLED": true
}
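Custom OpenAI-compatible endpoints follow the `CUSTOM_{NAME}_*` pattern shown in the configuration above. A hedged sketch for a provider named `MYLLM` might look like the following; the exact variable suffixes (`_API_KEY`, `_BASE_URL`) and the lowercase provider name are assumptions based on that pattern, not documented values.

```json
{
  "CUSTOM_MYLLM_API_KEY": "YOUR_CUSTOM_API_KEY",
  "CUSTOM_MYLLM_BASE_URL": "https://llm.example.com/v1",
  "DEFAULT_PROVIDER": "myllm"
}
```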

Usage patterns and examples

- Create multiple ducks configured for different providers and query them in parallel to compare responses.
- Use the Duck Council to gather responses from all configured ducks at once.
- Run consensus voting to obtain a ranked decision with reasoning and confidence scores.
- Engage in structured debates or iterative refinement to improve outputs.
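Consensus voting maps to the `duck_vote` tool. A possible `tools/call` payload is sketched below; the `prompt` argument name is an assumption for illustration.

```json
{
  "method": "tools/call",
  "params": {
    "name": "duck_vote",
    "arguments": {
      "prompt": "Which caching strategy fits a read-heavy API?"
    }
  }
}
```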

Troubleshooting

If a provider isn’t responding or you encounter rate limiting, verify your API keys and endpoints, then check provider health with the list_ducks tool. Ensure any local CLI agents or local runtimes required by certain providers are running and accessible. Review logs for detailed error messages to pinpoint misconfigurations.

Security and safety notes

Leverage guardrails to enforce rate limits, per-server token limits, and PII redaction. Use per-provider approvals for sensitive actions and monitor health checks to detect unhealthy providers. Apply session-based approvals to manage access to MCP features.

Examples of typical workflows

- Quick multi-duck query: ask the same question to multiple providers and view a side-by-side comparison.
- Consensus voting: collect votes from ducks with explanations and confidence scores, then select the top-ranked response.
- Iterative refinement: two ducks collaboratively improve a single answer through back-and-forth revisions.
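The iterative-refinement workflow corresponds to the `duck_iterate` tool. The following `tools/call` sketch shows one plausible invocation; the `prompt`, `ducks`, and `rounds` argument names and values are hypothetical, chosen only to illustrate a two-duck, multi-round exchange.

```json
{
  "method": "tools/call",
  "params": {
    "name": "duck_iterate",
    "arguments": {
      "prompt": "Draft a retry policy for flaky HTTP calls.",
      "ducks": ["openai", "groq"],
      "rounds": 3
    }
  }
}
```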

Support and further reading

Explore the available tools and prompt templates to structure multi-LLM workflows. Set up the required providers, enable MCP Apps for rich UIs, and refer to the setup guides for provider-specific considerations.

Maintenance and updates

Keep dependencies up to date and monitor health checks for providers. Regularly review guardrails and usage tracking to understand costs and ensure compliant behavior across all ducks.

Available tools

ask_duck

Ask a single question to a specific LLM provider.

chat_with_duck

Conversation with context maintained across messages.

clear_conversations

Clear all stored conversation history.

list_ducks

List configured providers and their health status.

list_models

List available models for providers.

compare_ducks

Ask the same question to multiple providers in parallel.

duck_council

Collect and view responses from all configured ducks.

get_usage_stats

Show usage statistics and estimated costs per provider.

duck_vote

Multi-duck voting with reasoning and confidence scores.

duck_judge

Have one duck evaluate and rank the others’ responses.

duck_iterate

Iteratively refine a response between two ducks.

duck_debate

Structured multi-round debates between ducks.

mcp_status

Check MCP Bridge status and connected servers.

get_pending_approvals

See pending MCP tool approval requests.

approve_mcp_request

Approve or deny a duck's MCP tool request.