Sequential Thinking Multi-Agent System MCP server

Orchestrates a team of specialized agents working in parallel to break down complex problems through structured thinking steps, enabling multi-disciplinary analysis with greater depth than single-agent approaches.
Provider
Frad Lee
Release date
Apr 03, 2025
Language
Python
Stats
2.9K downloads
134 stars

This MCP (Model Context Protocol) server implements a sophisticated sequential thinking process using multiple specialized AI agents working together to solve complex problems. It provides a more advanced approach compared to simpler state-tracking methods by utilizing coordinated agents for deeper analysis.

Installation

Prerequisites

  • Python 3.10+
  • Access to a compatible LLM API (select one):
    • Groq (requires GROQ_API_KEY)
    • DeepSeek (requires DEEPSEEK_API_KEY)
    • OpenRouter (requires OPENROUTER_API_KEY)
  • Exa API Key (required only if using the Researcher agent)
  • uv package manager (recommended) or pip

Installing via Smithery

For automatic installation via Smithery for Claude Desktop:

npx -y @smithery/cli install @FradSer/mcp-server-mas-sequential-thinking --client claude

Manual Installation

  1. Clone the repository:

    git clone [email protected]:FradSer/mcp-server-mas-sequential-thinking.git
    cd mcp-server-mas-sequential-thinking
    
  2. Set up environment variables in a .env file:

    # Select LLM provider (default is "deepseek")
    LLM_PROVIDER="deepseek"
    
    # API key for chosen provider
    DEEPSEEK_API_KEY="your_deepseek_api_key"
    
    # Only if using Exa for research capabilities
    EXA_API_KEY="your_exa_api_key"
    
  3. Install dependencies:

    # Create and activate virtual environment
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
    # Using uv (recommended)
    uv pip install -r requirements.txt
    
    # Or using pip
    pip install -r requirements.txt
    

Configuration

To configure this server with your MCP client, add it to your client's configuration:

{
  "mcpServers": {
      "mas-sequential-thinking": {
         "command": "uvx",
         "args": [
            "mcp-server-mas-sequential-thinking"
         ],
         "env": {
            "LLM_PROVIDER": "deepseek",
            "DEEPSEEK_API_KEY": "your_deepseek_api_key",
            "EXA_API_KEY": "your_exa_api_key"
         }
      }
   }
}

Running the Server

Start the server using one of these methods:

# Using uv (recommended)
uv --directory /path/to/mcp-server-mas-sequential-thinking run mcp-server-mas-sequential-thinking

# Or directly with Python
python main.py

The server will listen for requests via stdio, making the sequentialthinking tool available to compatible MCP clients.

Using the Sequential Thinking Tool

Tool Parameters

The tool accepts parameters according to the ThoughtData model:

thought: str                 # Content of the current thought/step
thoughtNumber: int           # Sequence number (>=1)
totalThoughts: int           # Estimated total steps (>=1, suggest >=5)
nextThoughtNeeded: bool      # Is another step required after this?
isRevision: bool = False     # Is this revising a previous thought?
revisesThought: Optional[int] = None # If isRevision, which thought number?
branchFromThought: Optional[int] = None # If branching, from which thought?
branchId: Optional[str] = None # Unique ID for the new branch being created
needsMoreThoughts: bool = False # Signal if estimate is too low before last step
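
The fields above can be sketched as a plain Python dataclass with the documented constraints enforced at construction time. This is an illustrative sketch only (the actual server may define `ThoughtData` differently, e.g. as a Pydantic model); the validation rules shown are inferred from the parameter comments above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThoughtData:
    """Illustrative sketch of the tool's input parameters."""
    thought: str                              # Content of the current thought/step
    thoughtNumber: int                        # Sequence number (>= 1)
    totalThoughts: int                        # Estimated total steps (>= 1)
    nextThoughtNeeded: bool                   # Is another step required after this?
    isRevision: bool = False                  # Is this revising a previous thought?
    revisesThought: Optional[int] = None      # If isRevision, which thought number?
    branchFromThought: Optional[int] = None   # If branching, from which thought?
    branchId: Optional[str] = None            # Unique ID for the new branch
    needsMoreThoughts: bool = False           # Signal if the estimate is too low

    def __post_init__(self) -> None:
        # Enforce the >= 1 constraints stated in the parameter list.
        if self.thoughtNumber < 1 or self.totalThoughts < 1:
            raise ValueError("thoughtNumber and totalThoughts must be >= 1")
        # A revision must say which earlier thought it revises.
        if self.isRevision and self.revisesThought is None:
            raise ValueError("revisesThought is required when isRevision is True")
```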

How It Works

  1. An LLM uses a starter prompt to define the problem and initiate the process
  2. The LLM calls the sequentialthinking tool with the first thought
  3. The Multi-Agent System processes the thought:
    • The Coordinator (Team) analyzes the input and delegates subtasks
    • Specialist agents (Planner, Researcher, Analyzer, Critic, Synthesizer) handle specific aspects
    • The Coordinator synthesizes results and provides guidance
  4. The tool returns a JSON response containing the synthesized analysis and guidance
  5. The LLM formulates the next thought based on this guidance
  6. The process continues iteratively until completion
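
The loop in steps 2–6 can be sketched as a small Python driver. Here `call_tool` is a stand-in for however your client invokes the `sequentialthinking` tool, and the way the next thought is derived from `coordinatorResponse` is simplified; both are assumptions for illustration, not the server's actual client code.

```python
def run_sequential_thinking(call_tool, first_thought, estimated_total=5):
    """Drive the sequentialthinking tool until the coordinator signals completion.

    call_tool: callable taking a parameter dict and returning the tool's
    JSON response as a dict (a stand-in for the real MCP tool invocation).
    """
    thought, number, history = first_thought, 1, []
    while True:
        response = call_tool({
            "thought": thought,
            "thoughtNumber": number,
            "totalThoughts": estimated_total,
            "nextThoughtNeeded": number < estimated_total,
        })
        history.append(response)
        # Stop when the coordinator says no further step is needed.
        if not response["nextThoughtNeeded"]:
            break
        # Formulate the next thought from the coordinator's guidance
        # (in practice the LLM does this, not string formatting).
        thought = f"Next step based on: {response['coordinatorResponse']}"
        number += 1
    return history
```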

Response Format

The tool returns a JSON response containing:

{
  "processedThoughtNumber": 1,
  "estimatedTotalThoughts": 5,
  "nextThoughtNeeded": true,
  "coordinatorResponse": "Analysis of the thought...",
  "branches": ["main"],
  "thoughtHistoryLength": 1,
  "branchDetails": {
    "currentBranchId": "main",
    "branchOriginThought": null,
    "allBranches": {
      "main": 1
    }
  },
  "isRevision": false,
  "revisesThought": null,
  "isBranch": false,
  "status": "success",
  "error": null
}
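
A caller should check `status` and `error` before acting on the guidance. A minimal sketch of that handling, assuming the response fields shown above:

```python
import json

def extract_guidance(response_json: str) -> str:
    """Parse a sequentialthinking response and return the coordinator's guidance.

    Raises RuntimeError when the tool reports a non-success status.
    """
    data = json.loads(response_json)
    if data.get("status") != "success":
        raise RuntimeError(data.get("error") or "sequentialthinking tool call failed")
    return data["coordinatorResponse"]
```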

Advanced Features

Branching and Revisions

The tool supports complex thought patterns:

  • Revisions: Correct or improve previous thoughts by setting isRevision: true and revisesThought: N
  • Branching: Explore alternative paths by setting branchFromThought: N and providing a branchId
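
Two illustrative parameter sets for these patterns, shown as Python dicts (the thought text, numbers, and branch ID are made-up examples):

```python
# Revise thought #2 at step 4 of an estimated 6-step process.
revision_call = {
    "thought": "Correcting the assumption made earlier about input size",
    "thoughtNumber": 4,
    "totalThoughts": 6,
    "nextThoughtNeeded": True,
    "isRevision": True,
    "revisesThought": 2,
}

# Fork an alternative line of reasoning from thought #3.
branch_call = {
    "thought": "Exploring a caching-based approach instead",
    "thoughtNumber": 4,
    "totalThoughts": 6,
    "nextThoughtNeeded": True,
    "branchFromThought": 3,
    "branchId": "alt-approach-1",
}
```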

External Research

When configured with an Exa API key, the Researcher agent can gather information from external sources to enhance the thinking process.

Token Usage Warning

⚠️ This Multi-Agent System consumes significantly more tokens than single-agent alternatives due to its architecture. Each call potentially involves multiple specialist agents working in parallel, leading to 3-6x higher token usage compared to simpler approaches. The system prioritizes analysis depth and quality over token efficiency.

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the added MCP server makes available and will call them when it needs to.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.
