Sequential Thinking MCP Server
MCP server enabling multi-agent sequential thinking with optional web research via ExaTools.
This background MCP server gives your LLM client a multi-agent sequential thinking process. It orchestrates six specialized agents to analyze a problem from multiple angles, then synthesizes their insights into actionable guidance. This enables deeper reasoning, structured problem solving, and more robust results for complex tasks handled by your LLM workflows.
Use the sequential thinking MCP server to process complex thoughts initiated by your LLM client. The system routes your problem to a team of specialized agents, runs parallel analyses where appropriate, and then combines their findings into a coherent, actionable answer. You can leverage single-agent processing for straightforward questions or scale up to full multi-agent sequences for deeper exploration.
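As a conceptual sketch only (not the server's actual implementation), the routing decision described above might look like the following; the agent names and the complexity heuristic are illustrative assumptions:

```python
# Illustrative sketch of complexity-based routing; the real server's
# heuristics and agent roster may differ.
from dataclasses import dataclass

SPECIALIST_AGENTS = ["planner", "researcher", "analyzer", "critic", "synthesizer"]

@dataclass
class Route:
    agents: list      # which agents will process the thought
    parallel: bool    # whether their analyses can run concurrently

def route_thought(thought: str) -> Route:
    """Send short, simple thoughts to a single agent; escalate
    complex ones to the full multi-agent team."""
    complex_markers = ("compare", "trade-off", "design", "strategy")
    is_complex = len(thought.split()) > 30 or any(
        m in thought.lower() for m in complex_markers
    )
    if is_complex:
        return Route(agents=list(SPECIALIST_AGENTS), parallel=True)
    return Route(agents=["generalist"], parallel=False)
```

A simple factual question stays on the single-agent path, while a design or trade-off question fans out to the full team whose findings are then synthesized.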
Prerequisites include Python 3.10 or newer and a compatible LLM provider key. You also need a way to run the MCP server locally, such as uv or Python directly.
```sh
# Quick start: local install with uv (recommended if you have uv installed)
uv pip install .

# Or install with pip directly
pip install .
```

Configure the MCP client to connect to the sequential-thinking server using the MCP config example below. Environment variables select the LLM provider, supply its API key, and optionally enable ExaTools web research via EXA_API_KEY.
```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "mcp-server-mas-sequential-thinking",
      "env": {
        "LLM_PROVIDER": "deepseek",
        "DEEPSEEK_API_KEY": "your_api_key",
        "EXA_API_KEY": "your_exa_key_optional"
      }
    }
  }
}
```

ExaTools research is optional and requires EXA_API_KEY; the server operates with or without it. The Enhanced Model is used for synthesis, while individual agents run on the Standard Model unless configured otherwise.
Keep API keys and tokens secret. Use separate keys for development and production environments. Monitor token usage, as multi-agent processing can consume more tokens and incur higher costs. Limit access to the MCP server to trusted clients.
If the server fails to start, verify that Python 3.10+ is installed, the chosen LLM provider key is valid, and the environment variables are set correctly. Also confirm that the MCP client is configured to launch the server command and that the mcp-server-mas-sequential-thinking executable is reachable on your PATH.
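A minimal preflight check along these lines can catch most misconfigurations before launch; only the deepseek entry in the provider map comes from this page, the rest would follow the same pattern:

```python
import shutil
import sys

# Maps each provider to the env var holding its key; extend as needed.
PROVIDER_KEYS = {"deepseek": "DEEPSEEK_API_KEY"}

def preflight(env: dict) -> list:
    """Return a list of human-readable problems; empty means ready."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append("Python 3.10+ is required")
    provider = env.get("LLM_PROVIDER")
    if not provider:
        problems.append("LLM_PROVIDER is not set")
    else:
        key_var = PROVIDER_KEYS.get(provider)
        if key_var and not env.get(key_var):
            problems.append(f"{key_var} is not set for provider '{provider}'")
    if shutil.which("mcp-server-mas-sequential-thinking") is None:
        problems.append("server executable not found on PATH")
    return problems
```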
The server exposes two capabilities:
- An MCP tool that processes thoughts through the multi-agent system, routing each one to specialized agents and returning a synthesized analysis.
- Optional web research via ExaTools, used by select agents to fetch current facts, counterexamples, success stories, and innovations.