
Meta Prompt MCP Server

Turn any MCP Client into a "multi-agent" system (via prompting)

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "tisu19021997-meta-prompt-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "path/to/your/meta-prompt-mcp",
        "run",
        "mcp-meta-prompt"
      ]
    }
  }
}
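If you prefer to generate this entry programmatically (for example, to substitute an absolute path for `--directory`), here is a minimal Python sketch. The `make_server_entry` helper is illustrative, not part of the server; `path/to/your/meta-prompt-mcp` remains a placeholder you must replace.

```python
import json

def make_server_entry(directory: str) -> dict:
    """Build the MCP client entry shown in the configuration above."""
    return {
        "mcpServers": {
            "tisu19021997-meta-prompt-mcp-server": {
                "command": "uv",
                "args": ["--directory", directory, "run", "mcp-meta-prompt"],
            }
        }
    }

# Replace the placeholder with the actual checkout location on your machine.
config = make_server_entry("path/to/your/meta-prompt-mcp")
print(json.dumps(config, indent=2))
```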

You can run Meta Prompt MCP as a single-model multi-agent workflow, in which the Conductor guides tasks and Expert-like subagents perform subtasks, all within one model session. The server orchestrates dynamic problem solving by structuring tasks, delegating subproblems, and integrating expert input to produce robust results efficiently.

How to use

To use the Meta Prompt MCP server with a client, start the MCP runtime and send your request to the designated entry point: invoke meta_model_prompt and provide your problem statement. The Conductor analyzes the task, breaks it into subtasks, and consults expert-style roles to execute each part. You receive a single, consolidated response that reflects the Conductor's plan and the integrated expert outputs.

When you want to run a new prompt, ensure your client is configured to connect through the MCP runtime using the correct working directory where the MCP server is installed. The workflow is designed to be self-contained and does not require you to manage multiple separate model instances.
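Under the hood, a stdio-based MCP client simply spawns the configured command and exchanges messages over its stdin/stdout. The sketch below shows the argv your client executes for this server; `server_argv` is a hypothetical helper for illustration, and the directory path is still a placeholder.

```python
import subprocess

def server_argv(directory: str) -> list[str]:
    """Argv an MCP client executes to start this server over stdio."""
    return ["uv", "--directory", directory, "run", "mcp-meta-prompt"]

# Your MCP client normally performs this launch for you; shown here only
# to make the stdio flow concrete:
# proc = subprocess.Popen(
#     server_argv("path/to/your/meta-prompt-mcp"),
#     stdin=subprocess.PIPE,
#     stdout=subprocess.PIPE,
# )
```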

How to install

Before you begin, make sure you have a modern operating system with a shell, Python (required by some tooling), and the uv package manager installed.

git clone https://github.com/tisu19021997/meta-prompt-mcp-server.git
cd meta-prompt-mcp-server

Install uv, the fast Python package manager used to run MCP processes. Choose the command that matches your platform.

curl -LsSf https://astral.sh/uv/install.sh | sh  # macOS / Linux
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"  # Windows PowerShell
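After installing, confirm that uv is discoverable on your PATH (equivalent to running `uv --version` in a fresh shell). A quick check from Python, with `uv_available` as an illustrative helper:

```python
import shutil

def uv_available() -> bool:
    """True if the `uv` executable can be found on PATH."""
    return shutil.which("uv") is not None

if uv_available():
    print("uv found")
else:
    print("uv not on PATH - restart your shell or re-run the installer")
```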

Additional notes and configuration

Configure your client to connect to the MCP server by using the stdio command flow described below. The server is started with the uv runtime and points to your local MCP directory where the meta-prompt workflow is defined.

Example client configuration (stdio). This config runs the MCP workflow from your local setup and uses the meta-prompt-mcp directory with the mcp-meta-prompt entry point.

{
  "type": "stdio",
  "name": "meta_prompt_mcp",
  "command": "uv",
  "args": [
    "--directory",
    "path/to/your/meta-prompt-mcp",
    "run",
    "mcp-meta-prompt"
  ]
}

Security and reliability notes

Operate the MCP server in trusted environments. The Conductor and Expert roles are simulated within a single model call, providing efficiency, but you should verify outputs for high-stakes decisions. If your deployment requires stricter separation of concerns, consider enabling independent expert evaluation in your broader pipeline.

If you encounter issues with the expert simulation, note that a fallback is used when an independent expert call isn't available in your client. In that case, the final output reflects the Conductor's own generated content.

Troubleshooting tips

If the MCP workflow does not start, confirm that the working directory path is correct and that uv is installed and accessible in your system PATH. Re-run the configuration step to ensure the correct directory and entry point are referenced.
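The two checks above can be scripted. Below is a small diagnostic sketch; `diagnose` is a hypothetical helper, and its argument is whatever path you configured for `--directory`.

```python
import shutil
from pathlib import Path

def diagnose(directory: str) -> list[str]:
    """Return problems that would prevent the MCP workflow from starting."""
    problems = []
    if not Path(directory).is_dir():
        problems.append(f"working directory not found: {directory}")
    if shutil.which("uv") is None:
        problems.append("uv is not installed or not on PATH")
    return problems

# Replace the placeholder with your configured directory.
for problem in diagnose("path/to/your/meta-prompt-mcp"):
    print("FIX:", problem)
```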

If you need to restart the workflow, stop the current process and re-invoke the meta_model_prompt entry point with your updated prompt.

Available tools

meta_model_prompt

Official entry point that activates the Conductor/Expert MCP workflow by submitting the initial prompt.

expert_model

Simulated expert subagent that the Conductor consults for subtask execution; it may be unavailable in some clients, in which case the workflow falls back to the Conductor's own output.