Open-source workflow engine with 300+ atomic modules exposed as MCP tools for AI agents.
Configuration
To register the server with an MCP client, add the following to your client's MCP configuration:

```json
{
  "mcpServers": {
    "flytohub-flyto-core": {
      "command": "python",
      "args": [
        "-m",
        "core.mcp_server"
      ]
    }
  }
}
```

Flyto Core is an open-source workflow automation engine designed for AI automation. It exposes hundreds of atomic modules that you can compose into reliable automation pipelines and access via MCP, so AI agents and automation tools can discover, inspect, and execute capabilities directly on your machine or in your environment.
You connect an MCP client to the Flyto Core MCP server to discover available modules, inspect their parameters, and execute them as part of larger workflows. Your AI agents can list modules, fetch module information, and run specific actions without requiring glue code. Once the MCP server is running, you can start building YAML workflows that chain modules together, retry on failures, and handle errors gracefully.
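As a sketch of what such a YAML workflow might look like (the field names, module identifiers, and retry options below are illustrative assumptions, not Flyto Core's documented schema):

```yaml
# Hypothetical workflow sketch; field and module names are assumptions,
# not Flyto Core's documented schema.
name: fetch-and-summarize
steps:
  - id: fetch
    module: http.request          # assumed module name
    params:
      url: https://example.com/data.json
    retry:
      attempts: 3
      backoff_seconds: 2
  - id: summarize
    module: ai.ollama_chat        # assumed module name
    params:
      prompt: "Summarize: {{ steps.fetch.output }}"
    on_error: continue
```

The idea is that each step invokes one atomic module, and retry and error-handling policy is declared per step rather than written as glue code.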
Before installing, you need Python 3.8 or newer and Git. pip, Python's package installer, ships with modern Python releases and is used in the commands below.
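A quick way to confirm your interpreter meets the minimum version, using only the standard library:

```python
import sys

# Flyto Core requires Python 3.8 or newer.
meets_requirement = sys.version_info >= (3, 8)
print("Python", sys.version.split()[0],
      "OK" if meets_requirement else "is too old; install 3.8+")
```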
Install from PyPI:

```shell
pip install flyto-core
```

If you prefer to build from source, you can clone the repository, install its dependencies, and run one of the example workflows:

```shell
git clone https://github.com/flytohub/flyto-core.git
cd flyto-core
pip install -r requirements.txt
python run.py workflows/_test/test_text_reverse.yaml
```

Flyto Core can expose an MCP server that your MCP clients connect to for discovering and executing modules. Start it by invoking the server's entry point as a Python module.
```shell
python -m core.mcp_server
```

Once connected, an MCP client can discover modules with capabilities such as the following:

- Reverse the characters in a string and return the result.
- Convert a string to uppercase.
- Launch a browser instance for automation tasks.
- Navigate the browser to a specified URL.
- Extract information from a page using a selector.
- Perform HTTP requests with built-in retry and error handling.
- Chat with a local Ollama model for AI-assisted workflows.
- Load and run AI models within a workflow context.
- Store and retrieve memory for agents during task execution.
- Handle vector-based memory for semantic search and retrieval.
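To make the composition model concrete, here is a plain-Python sketch of chaining two atomic modules with per-step retry. This is illustrative only, not Flyto Core's actual API: the module functions and the `run_with_retry` helper are hypothetical stand-ins for the string modules described above.

```python
import time

# Hypothetical stand-ins for two of the atomic modules described above.
def text_reverse(text: str) -> str:
    return text[::-1]

def text_uppercase(text: str) -> str:
    return text.upper()

def run_with_retry(module, value, attempts=3, delay=0.1):
    """Run a module callable, retrying on failure -- a simplified
    version of the engine's per-step retry behavior."""
    for attempt in range(1, attempts + 1):
        try:
            return module(value)
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Chain two modules, as a YAML workflow step sequence would.
result = run_with_retry(text_uppercase, run_with_retry(text_reverse, "flyto"))
print(result)  # -> "OTYLF"
```

In the real engine, this chaining, retry, and error handling is declared in the YAML workflow rather than written by hand.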