Provides a production-ready MCP server to connect clients, route tasks, and coordinate 60+ agents with self-learning capabilities.
Configuration
```json
{
  "mcpServers": {
    "ruvnet-claude-flow": {
      "command": "npx",
      "args": [
        "claude-flow@v3alpha",
        "mcp",
        "start"
      ],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "ANTHROPIC_API_KEY": "sk-...",
        "CLAUDE_FLOW_LOG_LEVEL": "info",
        "CLAUDE_FLOW_TOOL_MODE": "develop",
        "GOOGLE_GEMINI_API_KEY": "AIza...",
        "CLAUDE_FLOW_TOOL_GROUPS": "implement,test,memory"
      }
    }
  }
}
```

Claude-Flow's MCP Server enables clients to connect, control, and orchestrate a production-ready multi-agent flow. It exposes a stable interface for routing tasks through a fleet of 60+ specialized agents, coordinates swarms with fault-tolerant consensus, and supports self-learning memory to improve routing decisions over time. The server can be started locally or wired into remote clients for enterprise-grade AI orchestration across diverse environments.
You interact with the MCP Server from your AI environment or Claude-Flow client by starting the MCP server process and then connecting your client to it. The server acts as a central controller for routing tasks to specialized agents, managing memory stores, and coordinating swarms with robust consensus.
Common usage patterns include starting a local MCP server for development, or connecting a remote client to a server that runs in the cloud or inside your private network. Once connected, you can request task routing, spawn agent swarms, inspect agent status, and trigger background workers that optimize memory, security, and performance.
In practice, you typically perform these steps: start the MCP server, point your client at it, submit a task description, and receive a structured result from the broker, which routes the task to the best-suited agents. The system automatically leverages multiple LLM providers, memory stores, and training loops to improve routing over time.
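Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages over the chosen transport. The sketch below shows the shape of the two key requests in that flow, the `initialize` handshake and a `tools/call` task submission; the tool name and arguments are illustrative placeholders, not tool names documented on this page:

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request, the message shape used by MCP."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# 1. Handshake: the client announces its protocol version and capabilities.
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",   # example MCP protocol revision
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# 2. Task submission: invoke a server tool (name/arguments are hypothetical).
call = jsonrpc_request(2, "tools/call", {
    "name": "task_route",              # hypothetical claude-flow tool name
    "arguments": {"description": "Add unit tests for the parser module"},
})

# Over the stdio transport, each message is serialized as one JSON line.
wire = json.dumps(call) + "\n"
print(wire)
```

The structured result comes back as a JSON-RPC response carrying the same `id`, which is how the client matches replies to submitted tasks.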
Prerequisites: you need Node.js and a compatible package manager installed on your system.
Step-by-step commands you can run to install and start the MCP Server locally:
```shell
# One-line install (recommended)
curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/claude-flow@main/scripts/install.sh | bash

# Or full setup with MCP + diagnostics (if you want the diagnostics and full stack)
curl -fsSL https://cdn.jsdelivr.net/gh/ruvnet/claude-flow@main/scripts/install.sh | bash -s -- --full

# Start MCP server (local stdio transport)
npx claude-flow@v3alpha mcp start
```

The MCP server connects with clients over a transport you choose. For local development, the standard approach is to run the MCP server in the same environment as your client via the stdio transport. For remote operation, you can run the MCP server with the HTTP transport and expose a URL for clients to connect to, enabling centralized orchestration across machines.
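For the remote case, a client-side entry typically points at the server's URL instead of a local command. The exact keys vary by MCP client, so treat the following as an illustrative shape (the `type` and `url` fields and the hostname are assumptions, not a verified claude-flow configuration):

```json
{
  "mcpServers": {
    "ruvnet-claude-flow": {
      "type": "http",
      "url": "https://your-mcp-host.example.com/mcp"
    }
  }
}
```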
Environment variables and runtime options let you tune providers, memory backends, and swarm behavior. You can configure multiple LLM providers with automatic failover, tune the HNSW vector index used for memory and search, and set up background workers that continuously audit security, optimize performance, and learn from results.
Security features are built into the MCP Server to guard against prompt injection, data leakage, and unauthorized actions. You should enable input validation, credential handling, and safe command execution in your deployment configuration. The server supports multiple transport mechanisms, pluggable providers, and a robust event-driven model for monitoring and maintenance.
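As one illustrative layer of input validation (a generic sketch, not claude-flow's actual implementation), a server can check tool names against an allowlist and reject argument values containing shell metacharacters before anything is executed; the tool names and the character allowlist below are assumptions:

```python
import re

ALLOWED_TOOLS = {"task_route", "memory_search"}    # hypothetical tool names
SAFE_TEXT = re.compile(r"^[\w\s.,:/@-]{1,2000}$")  # conservative allowlist

def validate_call(name: str, arguments: dict) -> None:
    """Reject unknown tools and suspicious argument values before dispatch."""
    if name not in ALLOWED_TOOLS:
        raise ValueError(f"unknown tool: {name}")
    for key, value in arguments.items():
        if not isinstance(value, str) or not SAFE_TEXT.match(value):
            raise ValueError(f"rejected argument {key!r}")

# Passes: known tool, plain-text argument.
validate_call("task_route", {"description": "Refactor the auth module"})
```

Allowlisting (rather than blocklisting) is the safer default here: anything not explicitly permitted is rejected, which also blunts prompt-injection payloads that rely on unusual characters.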
For production deployments, use a remote MCP endpoint with proper authentication and rate limiting. Consider enabling flow monitoring, frequent health checks, and automated backup of memory stores to ensure resilience and auditability.
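Rate limiting in front of a remote endpoint can be as simple as a token bucket; the following is a minimal sketch with illustrative parameters, not a claude-flow built-in:

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests/second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # roughly the burst capacity succeeds immediately
```

A request that is not allowed would be answered with an HTTP 429 by the fronting endpoint rather than forwarded to the agent broker.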
If the MCP server fails to start, verify that the requested port is not in use, check logs for the exact error, and ensure the MCP process has permission to bind to the port. If a client cannot connect, confirm the MCP transport configuration and network reachability between client and server.
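To check whether a port is already bound before starting the server with the HTTP transport, you can simply attempt to bind it yourself; the port number below is illustrative:

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind the port, i.e. nothing else holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_free(8787))  # example port; False means something is listening
```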
Low throughput or high latency often indicates memory pressure or a misconfigured HNSW index or routing policy. Review memory settings, reduce concurrent agents if needed, and inspect background workers for any blocked tasks. You can run diagnostics to verify memory health and provider connectivity.
To get the most from Claude-Flow MCP, enable self-learning hooks, configure multiple LLM providers with automatic failover, and use memory-backed reasoning to route tasks efficiently.
This MCP server is designed for enterprise-grade AI orchestration with fault-tolerant swarm coordination, self-learning routing, and secure handling of credentials and prompts. It integrates tightly with the Claude-Flow ecosystem, enabling seamless tooling access and scalable, resilient operation.
The server exposes the following operations to connected clients:

- Start the MCP server using the specified transport (stdio or http) to expose MCP endpoints for clients.
- List available MCP endpoints and their status.
- List MCP tools available to clients.
- Invoke an MCP tool with arguments.
- List MCP server resources and state.
- Read content from MCP server resources.
- List prompt templates and argument bindings.
- Get a prompt with given arguments.
- Query the status of running tasks.
- Cancel a running task.
- Auto-complete command arguments.