
ComfyUI MCP Server

Exposes dynamically loaded ComfyUI workflows as executable tools for large language models with parameter mapping and WebSocket progress updates.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "yarnovo-comfyui-mcp": {
      "command": "npx",
      "args": [
        "/absolute/path/to/comfyui-mcp"
      ],
      "env": {
        "BYPASS_PROXY": "true",
        "COMFYUI_HOST": "172.22.240.1",
        "COMFYUI_PORT": "8000",
        "WORKFLOWS_DIR": "/absolute/path/to/comfyui-mcp/workflows"
      }
    }
  }
}

The ComfyUI MCP server dynamically loads ComfyUI workflows and exposes them as tools for a large language model. It scans your workflows folder, registers each workflow as a dedicated tool, and reports progress in real time over WebSocket. It is designed to work in environments including WSL2 and supports custom output directories and flexible parameter mappings.
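The real-time progress reporting relies on ComfyUI's WebSocket endpoint, which emits JSON events while a prompt executes. As a minimal sketch (assuming ComfyUI's standard `progress` event shape of `{type, data: {value, max}}`), the server-side handling could look like this:

```javascript
// Parse a ComfyUI WebSocket message and return a completion percentage,
// or null for messages that are not progress events.
function parseProgress(raw) {
  const msg = JSON.parse(raw);
  if (msg.type !== "progress" || !msg.data || !msg.data.max) return null;
  return Math.round((msg.data.value / msg.data.max) * 100);
}

// Hypothetical usage against a running ComfyUI instance
// (Node 22+ provides a global WebSocket):
// const ws = new WebSocket(
//   `ws://${process.env.COMFYUI_HOST}:${process.env.COMFYUI_PORT}/ws?clientId=mcp`
// );
// ws.onmessage = (e) => {
//   const pct = parseProgress(e.data);
//   if (pct !== null) console.log(`${pct}%`);
// };
```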

How to use

Connect to the ComfyUI MCP Server with an MCP client to load workflows, map input parameters to workflow descriptors, and execute workflows as tools from your chat or automation pipeline. Each workflow becomes a tool with a name like run_text_to_image_workflow_... that accepts structured inputs and returns results when the workflow finishes. You can specify an output directory per tool, or rely on the default outputs folder.
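Under the hood, the MCP client invokes a workflow tool via a standard JSON-RPC `tools/call` request. A sketch of such a request (the prompt and path values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_text_to_image_workflow_image_qwen_image",
    "arguments": {
      "prompt": "a watercolor landscape",
      "output_dir": "/home/username/my_images"
    }
  }
}
```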

How to install

Prerequisites: Node.js (with npm) and a basic shell environment. You also need access to the workflow folder you want to expose.

# 1. Install dependencies for the MCP project
npm install

# 2. Build the MCP project
npm run build

# 3. Note the absolute path to the MCP project directory
pwd  # e.g. /home/username/comfyui-mcp

Additional setup and configuration

Configure the MCP server inputs and how it talks to ComfyUI. The workflow folder path and server host/port are provided via environment variables. If you are using WSL2, a script can help detect the Windows host IP and you may bypass the system proxy for seamless local access.

Create a local environment file and set these values accordingly.

# Create config file with defaults
cp .env.example .env

# Edit the environment file to point to your workflows and server
WORKFLOWS_DIR=./workflows
COMFYUI_HOST=172.22.240.1  # Windows host IP in WSL2 setups
COMFYUI_PORT=8000
BYPASS_PROXY=true

Connecting from Claude Desktop (example configuration)

You configure Claude Desktop to run the MCP server locally. Use an absolute path to your ComfyUI MCP project and forward the necessary environment variables.

{
  "mcpServers": {
    "comfyui": {
      "command": "npx",
      "args": ["/absolute/path/to/comfyui-mcp"],
      "env": {
        "WORKFLOWS_DIR": "/absolute/path/to/comfyui-mcp/workflows",
        "COMFYUI_HOST": "172.22.240.1",
        "COMFYUI_PORT": "8000",
        "BYPASS_PROXY": "true"
      }
    }
  }
}

Testing the connection

After starting, test the connection to verify that the MCP server is reachable and can enumerate workflows.

node test-connection.js
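If you want a quick standalone check independent of the bundled script, a stock ComfyUI install serves a /system_stats HTTP endpoint. A minimal sketch (the function names are illustrative; Node 18+ provides global fetch):

```javascript
// Build the ComfyUI base URL from the same values the MCP server reads
// from its environment variables.
function comfyBaseUrl(host, port) {
  return `http://${host}:${port}`;
}

// Probe ComfyUI's /system_stats endpoint; resolves with system/device
// info on a healthy install, rejects if the server is unreachable.
async function checkComfyUI(host, port) {
  const res = await fetch(`${comfyBaseUrl(host, port)}/system_stats`);
  if (!res.ok) throw new Error(`ComfyUI responded with ${res.status}`);
  return res.json();
}

// checkComfyUI(process.env.COMFYUI_HOST, process.env.COMFYUI_PORT)
//   .then(() => console.log("ComfyUI reachable"))
//   .catch((err) => console.error(err.message));
```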

Using a custom output directory

All tools support an optional output_dir parameter to save generated results to a specific folder. If you omit output_dir, results are saved in the default outputs directory.

Using run_text_to_image_workflow_image_qwen_image:
  prompt: "山水画,高清,细腻笔触"  # "landscape painting, high definition, fine brushwork"
  output_dir: "/home/username/my_images"

Available tools

run_text_to_image_workflow_image_qwen_image

Text-to-image workflow using Qwen visual language model to generate images from prompts, supports bilingual prompts and returns generated image results.

run_image_to_image_workflow_omnigen2_image_edit

Image-to-image editing with OmniGen2, enabling sophisticated modifications and style changes based on input image and prompts.

run_image_to_image_workflow_qwen_image_edit

Qwen-based image editing tool for intelligent image modifications guided by textual prompts.

run_image_to_image_workflow_qwen_image_controlnet_patch

Qwen with ControlNet Patch for precise control over generated edits.

run_image_to_image_workflow_qwen_image_instantx_controlnet

Qwen with InstantX for deep control over image transformations.

run_image_to_image_workflow_qwen_image_union_control

Qwen with Union Control for advanced boundary and edge control in edits.

run_image_to_image_workflow_rmbg_multiple_models

Background removal using multiple models to achieve clean separation.

run_image_to_image_workflow_yolo_cropper

YOLO-based object detection and cropping for targeted edits.

run_image_to_video_workflow_wan2_2_14b_i2v

WAN 2.2 14B image-to-video generation producing video from a still image.

run_image_to_video_workflow_wan2_2_14b_flf2v

WAN 2.2 14B first-and-last-frame interpolation to create smooth video.

run_text_to_video_workflow_wan2_2_14b_t2v

WAN 2.2 14B text-to-video generation.

run_text_to_audio_workflow_audio_ace_step_1_t2a_instrumentals

ACE instrumental music generation from text prompts.

run_text_to_audio_workflow_audio_ace_step_1_t2a_song

ACE song generation with lyrics from text prompts.

run_audio_to_audio_workflow_audio_ace_step_1_a2a_editing

ACE audio-to-audio editing and transformation of an input audio clip.