# fusion360-mcp
# Configuration

```json
{
  "mcpServers": {
    "jaskirat1616-fusion360-mcp": {
      "url": "http://127.0.0.1:9000/mcp",
      "headers": {
        "CLAUDE_API_KEY": "sk-antic-XXXXXXXX",
        "GEMINI_API_KEY": "AIzaXXXXXXXXXXXXXXXX",
        "OPENAI_API_KEY": "sk-XXXXXXXXXXXX"
      }
    }
  }
}
```

This MCP server lets Autodesk Fusion 360 talk to multiple AI backends to generate and validate parametric CAD actions from natural language. It handles routing, validation, and state persistence so you can design with AI safely and efficiently.
Install and run the MCP server, then connect Fusion 360 via the FusionMCP add-in. You can issue natural language prompts such as creating a box, designing mounting features, or defining a shaft. The server will route requests to available AI providers, validate outputs, and translate them into structured CAD actions for Fusion 360.
# Prerequisites
- Python 3.11+
- Autodesk Fusion 360 (2025 version recommended)
- At least one LLM provider (local Ollama, OpenAI, Gemini, or Claude)
# Step 1: Install Python dependencies
```sh
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
```
# Step 2: Run the MCP server
```sh
python -m mcp_server.server
```

The server exposes a REST API over a local HTTP endpoint. You can configure which AI providers to use, set fallback behavior, and enable caching. Typical configuration includes provider API keys, host/port settings, and cache preferences.
Configuration is described in the example config and includes keys for provider URLs, API keys, default model selections, fallback chains, server binding, and caching options. You can enable a cache backed by JSON or SQLite and control timeouts and retry behavior.
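A config covering those options might look like the following. The key names here are illustrative, derived from the options listed above, not the exact schema of the example config:

```json
{
  "providers": {
    "ollama": {"url": "http://127.0.0.1:11434", "default_model": "llama3"},
    "openai": {"api_key": "sk-XXXXXXXXXXXX", "default_model": "gpt-4o"}
  },
  "fallback_chain": ["ollama", "openai"],
  "mcp_host": "127.0.0.1",
  "mcp_port": 9000,
  "cache": {"backend": "sqlite", "enabled": true},
  "timeout_seconds": 30,
  "max_retries": 2
}
```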
If the server won't start, ensure the port isn't already in use or adjust mcp_port in the configuration. For add-in visibility, verify the Fusion 360 add-in is in the correct AddIns folder, restart Fusion 360, and re-run the add-in. Check API key permissions and ensure Ollama or other local/offline providers are running if you rely on them.
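A quick way to diagnose the port-in-use case is to probe the configured port before starting the server. This is a standalone diagnostic sketch, not part of the server itself:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful TCP connect, i.e. port taken
        return s.connect_ex((host, port)) == 0
```

If this returns True for your configured port (9000 in the example config), pick a different mcp_port or stop the conflicting process.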
The MCP server is a FastAPI app that routes requests to LLM clients, validates responses, and caches conversation and design state. It supports multiple providers in a fallback chain and uses a system prompt to guide JSON-formatted actions that Fusion 360 can execute.
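The fallback-chain idea can be sketched as follows. The names `route_with_fallback`, `call_provider`, and `validate` are hypothetical stand-ins, not the server's actual API:

```python
import json

def route_with_fallback(prompt, providers, call_provider, validate):
    """Try each provider in order; return (provider_name, actions) from the
    first response that parses as JSON and passes validation."""
    for name in providers:
        try:
            raw = call_provider(name, prompt)  # raw LLM output, expected to be JSON
            actions = json.loads(raw)
        except (json.JSONDecodeError, ConnectionError):
            continue  # unreachable or malformed provider: fall through to the next
        if validate(actions):
            return name, actions
    raise RuntimeError("all providers in the fallback chain failed")
```

This mirrors the behavior described above: unreachable providers and invalid JSON both trigger a fall-through, and only a validated action list is handed to Fusion 360.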
- Create a 20 mm cube by issuing a natural-language command. The server converts this into a sequence of CAD actions such as create_box, with safety checks for dimensions and units.
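The dimension and unit safety checks might look like the sketch below. The limits, unit set, and action schema are assumptions for illustration, not the server's actual rules:

```python
ALLOWED_UNITS = {"mm", "cm", "in"}
UNIT_TO_MM = {"mm": 1.0, "cm": 10.0, "in": 25.4}
MAX_DIM_MM = 10_000  # hypothetical sanity limit on any single dimension

def validate_create_box(action: dict) -> bool:
    """Reject a create_box action with unknown units, non-positive sizes,
    or dimensions beyond the sanity limit."""
    if action.get("type") != "create_box":
        return False
    unit = action.get("unit")
    if unit not in ALLOWED_UNITS:
        return False
    for key in ("width", "height", "depth"):
        value = action.get(key)
        if not isinstance(value, (int, float)) or value <= 0:
            return False
        if value * UNIT_TO_MM[unit] > MAX_DIM_MM:
            return False
    return True

cube = {"type": "create_box", "unit": "mm", "width": 20, "height": 20, "depth": 20}
```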
- Execute an MCP command: send a JSON payload with provider, model, prompt, and context; returns the generated actions and the LLM response.
- Health check: lists available providers and cache status.
- List models: enumerates available models by provider and version.
- Conversation history: fetches past prompts and actions for context.
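A request body for the execute call would carry the four documented fields. The values below (and the shape of `context`) are examples only, not the server's required schema:

```python
import json

payload = {
    "provider": "openai",
    "model": "gpt-4o",
    "prompt": "Create a 20 mm cube centered at the origin",
    "context": {"conversation_id": "demo-session"},
}
body = json.dumps(payload)  # what a client would POST to the server
```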