Sequential Thinking Multi-Agent System MCP server

Orchestrates a team of specialized agents working in parallel to break down complex problems through structured thinking steps, enabling multi-disciplinary analysis with greater depth than single-agent approaches.
Provider: Frad Lee
Release date: Apr 03, 2025
Language: Python
Stats: 4.7K downloads, 237 stars

The Sequential Thinking Multi-Agent System implements a structured, step-by-step thought process using a team of specialized AI agents working collaboratively. This MCP server enables complex problem-solving through coordinated planning, research, analysis, critique, and synthesis.

Installation

Using Smithery (Recommended for Claude Desktop)

Install automatically via Smithery:

npx -y @smithery/cli install @FradSer/mcp-server-mas-sequential-thinking --client claude

Manual Installation

  1. Clone the repository:

    git clone git@github.com:FradSer/mcp-server-mas-sequential-thinking.git
    cd mcp-server-mas-sequential-thinking
    
  2. Set up environment variables in a .env file:

    # LLM Configuration
    LLM_PROVIDER="deepseek"  # Options: "deepseek", "groq", "openrouter", "github", "ollama", "kimi"
    DEEPSEEK_API_KEY="your_deepseek_api_key"
    
    # Only set the API key for your chosen provider:
    # GROQ_API_KEY="your_groq_api_key"
    # OPENROUTER_API_KEY="your_openrouter_api_key"  # Required for "openrouter" and "kimi"
    # GITHUB_TOKEN="ghp_your_github_personal_access_token"  # Required for "github"
    
    # Optional: for research capabilities
    EXA_API_KEY="your_exa_api_key"
    
  3. Install dependencies (using a virtual environment is recommended):

    # Create and activate a virtual environment
    python -m venv .venv
    source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
    
    # Install with uv (recommended)
    uv pip install -r requirements.txt
    
    # Or using pip
    pip install -r requirements.txt
    

MCP Server Configuration

Add the server to your MCP client configuration:

{
  "mcpServers": {
    "mas-sequential-thinking": {
      "command": "uvx",
      "args": [
        "mcp-server-mas-sequential-thinking"
      ],
      "env": {
        "LLM_PROVIDER": "deepseek",
        "DEEPSEEK_API_KEY": "your_deepseek_api_key",
        "EXA_API_KEY": "your_exa_api_key"
      }
    }
  }
}

GitHub Models Configuration

If you prefer using GitHub Models (which provides access to OpenAI GPT models):

{
  "mcpServers": {
    "mas-sequential-thinking": {
      "command": "uvx",
      "args": [
        "mcp-server-mas-sequential-thinking"
      ],
      "env": {
        "LLM_PROVIDER": "github",
        "GITHUB_TOKEN": "ghp_your_github_personal_access_token",
        "GITHUB_TEAM_MODEL_ID": "openai/gpt-5",
        "GITHUB_AGENT_MODEL_ID": "openai/gpt-5-min",
        "EXA_API_KEY": "your_exa_api_key"
      }
    }
  }
}

Running the Server

With environment variables set and dependencies installed:

# Using uv (recommended)
uv --directory /path/to/mcp-server-mas-sequential-thinking run mcp-server-mas-sequential-thinking

# Or using Python directly
python main.py

The server will start and listen for requests via stdio.
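
If you want to exercise the server programmatically rather than through an MCP client app, a minimal sketch using the official MCP Python SDK might look like this (the provider, API key, and launch command below are placeholder assumptions; adjust them to match your setup):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server over stdio, mirroring the MCP client configuration above.
server = StdioServerParameters(
    command="uvx",
    args=["mcp-server-mas-sequential-thinking"],
    env={
        "LLM_PROVIDER": "deepseek",
        "DEEPSEEK_API_KEY": "your_deepseek_api_key",
    },
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # The sequentialthinking tool should appear in this list.
            print([tool.name for tool in tools.tools])

asyncio.run(main())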

Using the Sequential Thinking Tool

Tool Parameters

The sequentialthinking tool accepts the following parameters:

  • thought: Content of the current thought/step
  • thoughtNumber: Sequence number (≥1)
  • totalThoughts: Estimated total steps (≥1, suggest ≥5)
  • nextThoughtNeeded: Is another step required after this?
  • isRevision: Is this revising a previous thought? (default: false)
  • revisesThought: If revising, which thought number?
  • branchFromThought: If branching, from which thought?
  • branchId: Unique ID for the new branch being created
  • needsMoreThoughts: Signal if estimate is too low before last step (default: false)
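
For example, a first call and a later revision of an earlier step might pass arguments like these (the thought text is purely illustrative):

# First thought in a new chain.
first_thought = {
    "thought": "Break the problem into sub-questions and plan the analysis.",
    "thoughtNumber": 1,
    "totalThoughts": 5,
    "nextThoughtNeeded": True,
}

# A later step that revises thought 2 after new information surfaced.
revision = {
    "thought": "Revisit the assumption made in step 2 about the data source.",
    "thoughtNumber": 3,
    "totalThoughts": 5,
    "nextThoughtNeeded": True,
    "isRevision": True,
    "revisesThought": 2,
}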

Tool Response Format

The tool returns a JSON response containing:

{
  "processedThoughtNumber": 1,
  "estimatedTotalThoughts": 5,
  "nextThoughtNeeded": true,
  "coordinatorResponse": "Analysis of your thought...",
  "branches": ["main"],
  "thoughtHistoryLength": 1,
  "branchDetails": {
    "currentBranchId": "main",
    "branchOriginThought": null,
    "allBranches": {
      "main": 1
    }
  },
  "isRevision": false,
  "revisesThought": null,
  "isBranch": false,
  "status": "success",
  "error": null
}
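
A typical client-side loop keeps calling the tool until the response says no further thought is needed. The sketch below assumes session is an initialized MCP ClientSession (as in the stdio example above) and that the tool's text response can be parsed as the JSON shown; the placeholder follow-up thought would in practice come from your own model or user, informed by coordinatorResponse:

import json

async def run_chain(session, first_thought, total=5):
    # Call sequentialthinking repeatedly until the server reports completion.
    number, thought, done = 1, first_thought, False
    while not done:
        result = await session.call_tool("sequentialthinking", arguments={
            "thought": thought,
            "thoughtNumber": number,
            "totalThoughts": total,
            "nextThoughtNeeded": number < total,
        })
        data = json.loads(result.content[0].text)
        print(data["coordinatorResponse"])
        done = not data["nextThoughtNeeded"]
        number += 1
        thought = f"Continue the analysis (step {number})."  # placeholder follow-up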

Testing Locally

You can test the server locally using MCP Inspector:

npx @modelcontextprotocol/inspector uv run main.py

Then visit http://127.0.0.1:6274/ and verify the sequentialthinking tool is available.

Token Usage Warning

⚠️ This tool consumes significantly more tokens than single-agent alternatives due to its Multi-Agent System architecture. Each call invokes multiple specialist agents (Planner, Researcher, Analyzer, Critic, Synthesizer) in addition to the Coordinator agent. Token usage may be 3-6x higher per thought step compared to simpler approaches.

Logging

Logs are written to ~/.sequential_thinking/logs/sequential_thinking.log by default, including detailed information about the thought processing flow and agent interactions.

How to install this MCP server

For Claude Code

To add this MCP server to Claude Code, run this command in your terminal:

claude mcp add-json "mas-sequential-thinking" '{"command":"uvx","args":["mcp-server-mas-sequential-thinking"],"env":{"LLM_PROVIDER":"deepseek","DEEPSEEK_API_KEY":"your_deepseek_api_key","EXA_API_KEY":"your_exa_api_key"}}'

See the official Claude Code MCP documentation for more details.

For Cursor

There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can instead add it to that project's .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server, go to Cursor Settings > Tools & Integrations and click "New MCP Server".

This opens the ~/.cursor/mcp.json file, where you can add your server like this:

{
    "mcpServers": {
        "mas-sequential-thinking": {
            "command": "uvx",
            "args": [
                "mcp-server-mas-sequential-thinking"
            ],
            "env": {
                "LLM_PROVIDER": "deepseek",
                "DEEPSEEK_API_KEY": "your_deepseek_api_key",
                "EXA_API_KEY": "your_exa_api_key"
            }
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project, create a new .cursor/mcp.json file in the project or add the server to the existing one. The configuration looks exactly the same as the global example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the added MCP server exposes and will call them when it needs to.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.

For Claude Desktop

To add this MCP server to Claude Desktop:

1. Find your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

2. Add this to your configuration file:

{
    "mcpServers": {
        "mas-sequential-thinking": {
            "command": "uvx",
            "args": [
                "mcp-server-mas-sequential-thinking"
            ],
            "env": {
                "LLM_PROVIDER": "deepseek",
                "DEEPSEEK_API_KEY": "your_deepseek_api_key",
                "EXA_API_KEY": "your_exa_api_key"
            }
        }
    }
}

3. Restart Claude Desktop for the changes to take effect
