
OmniTaskAgent MCP Server

Provides a multi-model task management MCP server with task creation, decomposition, analysis, and cross-system workflow integration.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "acnet-ai-omnitaskagent": {
      "command": "python",
      "args": [
        "/path/to/run_mcp.py"
      ],
      "env": {
        "LLM_MODEL": "gpt-4o",
        "MAX_TOKENS": "4000",
        "TEMPERATURE": "0.2",
        "OPENAI_API_KEY": "your-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-key"
      }
    }
  }
}

OmniTaskAgent MCP Server provides a Python-based, multi-model task management backend that you can connect to via an MCP client. It enables you to manage tasks, decompose complex work, analyze project complexity, and integrate with multiple task systems through a standardized MCP interface.

How to use

Connect to the OmniTaskAgent MCP Server from your MCP client using the stdio configuration described below. Start the local MCP service and point your editor or client to the provided command so you can exchange tasks, run analyses, and orchestrate workflows.

How to install

Before installing, make sure you have Python 3.11 or newer, Node.js (for MCP tooling), and a basic shell environment.

# Install Python dependencies (use either uv or pip)
uv pip install -e .
pip install -e .

# Install Node-based MCP tooling
npm install

Editor Integration (MCP Service)

To integrate with an editor (for example Cursor or VSCode), run the MCP server locally and configure your editor to spawn the MCP client as shown.

# Start STDIO-based MCP service
python run_mcp.py

# Example editor config snippet
{
  "mcpServers": {
    "task_master_agent": {
      "type": "stdio",
      "command": "/path/to/python",
      "args": ["/path/to/run_mcp.py"],
      "env": {
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}

Configuration and runtime behavior

The MCP server supports environment variables for API keys and model configuration. You configure these values to enable OpenAI, Anthropic, or other model providers and adjust runtime behavior such as model temperature and maximum tokens.
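As an illustration, a server like this might read those variables with fallbacks to defaults. The variable names below come from the configuration example above; the helper function and its default values are hypothetical:

```python
import os

def load_model_config():
    """Read model settings from the environment, falling back to defaults.

    Variable names match the MCP configuration example above; the
    defaults here are illustrative, not the server's actual values.
    """
    return {
        "model": os.environ.get("LLM_MODEL", "gpt-4o"),
        "max_tokens": int(os.environ.get("MAX_TOKENS", "4000")),
        "temperature": float(os.environ.get("TEMPERATURE", "0.2")),
        "openai_api_key": os.environ.get("OPENAI_API_KEY"),
        "anthropic_api_key": os.environ.get("ANTHROPIC_API_KEY"),
    }
```

Because the `env` block in the MCP configuration is passed to the spawned process, values set there are visible to the server exactly as ordinary environment variables.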

Usage examples and capabilities

You can create, list, update, and delete tasks, decompose tasks into subtasks, analyze project complexity, and parse product requirements automatically. The server is designed to work with multiple task systems and can guide you to the most suitable tool for each scenario.

Troubleshooting and notes

If the MCP service fails to start, check that the Python path in your configuration is correct and that the run_mcp.py script is accessible. Also verify that environment variables such as OPENAI_API_KEY are set in your editor configuration and that your MCP client is communicating with the server over the stdio channel.
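Those checks can be automated before launching the service. The following diagnostic helper is a hypothetical sketch, not part of OmniTaskAgent; adapt the paths and variable names to your own editor configuration:

```python
import os
import shutil

def preflight(python_cmd, script_path, required_env=("OPENAI_API_KEY",)):
    """Collect problems that would prevent `python run_mcp.py` from starting."""
    problems = []
    # The interpreter may be given as a bare command or an absolute path.
    if shutil.which(python_cmd) is None and not os.path.isfile(python_cmd):
        problems.append(f"Python interpreter not found: {python_cmd}")
    if not os.path.isfile(script_path):
        problems.append(f"MCP entry script not found: {script_path}")
    for name in required_env:
        if not os.environ.get(name):
            problems.append(f"Environment variable {name} is not set")
    return problems
```

An empty result means the basics are in place; anything returned points at the specific misconfiguration to fix in the editor's MCP settings.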

Tools and endpoints

The server provides a set of task management capabilities as described in the feature set, including task creation, listing, updating, deletion, decomposition, and analytical workflows. Use these tools through your MCP client to orchestrate tasks across connected systems.

Available tools

Task management

Create, list, update, and delete tasks with status tracking and dependency management.

Task decomposition

Break down complex tasks into subtasks, analyze complexity for planning, and parse product requirements documents (PRDs).

Model integration

Support for multiple model providers (OpenAI, Anthropic, and others) and cross-model workflows.

Editor integration

MCP-based integration with editors for smooth development workflows.

Intelligent workflow

LangGraph-based ReAct-style intelligent task management.

Multi-system integration

Connect to various task management systems and consolidate workflows.
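The dependency management mentioned above typically implies an execution order: a task should start only after the tasks it depends on are done. A hedged sketch of such an ordering check, using only the Python standard library (the mapping shape is an assumption, not the server's actual data format):

```python
from graphlib import TopologicalSorter

def execution_order(dependencies):
    """Return task IDs ordered so every task follows its dependencies.

    `dependencies` maps each task ID to the set of IDs it depends on.
    Raises graphlib.CycleError if the dependencies are circular.
    """
    return list(TopologicalSorter(dependencies).static_order())
```

A scheduler built on this ordering can also detect circular dependencies up front, which is useful feedback to surface through the MCP client before any work begins.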