Blender MCP Server
Controls Blender with local AI models via MCP, enabling natural-language prompts for modeling tasks and renders.
Configuration
```json
{
  "mcpServers": {
    "blender_http": {
      "url": "http://0.0.0.0:8000"
    }
  }
}
```

You can drive Blender with local AI models through MCP, enabling natural-language prompts to perform 3D tasks, manage scenes, and render outputs. This setup lets you run a local Ollama model, connect it to Blender via a dedicated MCP server, and use a Blender add-on to communicate seamlessly with the AI-driven workflow.
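The configuration above binds the client to all interfaces on the default port 8000. If you start the server with an explicit host and port instead (e.g. `blender-mcp --host 127.0.0.1 --port 8001`), the client config must point at the same address. A sketch of the adjusted entry:

```json
{
  "mcpServers": {
    "blender_http": {
      "url": "http://127.0.0.1:8001"
    }
  }
}
```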
Use an MCP client to send prompts that control Blender and query scene information. You can create objects, modify materials, render images, and download PolyHaven assets directly through prompts or tool actions. Start the MCP server, then run the client commands to issue actions such as retrieving scene details, creating objects, applying materials, and triggering renders.
To get started, clone the repository:

```shell
git clone https://github.com/dhakalnirajan/blender-open-mcp.git
cd blender-open-mcp
```

Create and activate a Python virtual environment using uv, then install the package in editable mode:

```shell
uv venv
source .venv/bin/activate   # Linux/macOS
.venv\Scripts\activate      # Windows
uv pip install -e .
```

Install the Blender add-on by loading the provided addon file into Blender, then enable the add-on in the Preferences.
```shell
# Steps inside the Blender UI:
# Edit -> Preferences -> Add-ons
# Install... and select addon.py from the blender-open-mcp directory
# Enable the Blender MCP add-on
```

Prepare an Ollama model in advance. If you need to pull a model, run:

```shell
ollama run llama3.2   # or another supported model, such as Gemma 3
```

Start the Ollama server in the background, then start the MCP server, and finally enable communication with Blender through the add-on.
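To confirm which models your local Ollama instance actually has before starting the MCP server, you can query Ollama's REST API (`GET /api/tags` lists locally pulled models). A minimal stdlib sketch; the helper names are illustrative:

```python
import json
from urllib.request import urlopen

def extract_model_names(tags_response):
    """Pull the model names out of a parsed /api/tags response payload."""
    return [m["name"] for m in tags_response.get("models", [])]

def local_model_names(base_url="http://localhost:11434"):
    """Return the names of models the Ollama server has pulled locally."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return extract_model_names(json.load(resp))
```

If `llama3.2` is missing from the returned list, pull it with `ollama pull llama3.2` before starting the MCP server.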
Start the MCP server (HTTP endpoint, port 8000 by default):

```shell
blender-mcp
# or with explicit host/port and Ollama settings
blender-mcp --host 127.0.0.1 --port 8001 --ollama-url http://localhost:11434 --ollama-model llama3.2
```

Alternatively, run the Python-based server directly:

```shell
python src/blender_open_mcp/server.py
```

Open Blender, access the 3D Viewport, and start the integrated MCP server from the Blender MCP panel.
```shell
# In the Blender UI: press N to open the sidebar, locate the Blender MCP
# panel, and click "Start MCP Server"
```

Interact with the server using the mcp client to issue prompts and tool commands. You can perform basic operations, query scene data, and render outputs. Use prompts to create objects, apply materials, or trigger renders, and rely on the tool list for more advanced actions.
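Under the hood, MCP clients frame tool invocations as JSON-RPC 2.0 `tools/call` requests; the `mcp` client builds these for you. A minimal sketch of the payload shape for `get_scene_info` (the helper function is illustrative, not part of the project's API):

```python
import json

def tools_call_request(tool_name, arguments=None, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' request as MCP clients do."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments or {}},
    }

payload = tools_call_request("get_scene_info")
print(json.dumps(payload, indent=2))
```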
```shell
mcp prompt "Create a cube named 'my_cube'." --host http://localhost:8000
mcp tool get_scene_info --host http://localhost:8000
mcp prompt "Render the image." --host http://localhost:8000
```

Optionally, download PolyHaven assets through prompts when enabled in MCP. You can request textures, HDRIs, or models directly into your Blender scene.

```shell
mcp prompt "Download a texture from PolyHaven." --host http://localhost:8000
```

If you encounter issues, verify that Ollama and the MCP server are running, check the Blender add-on settings, review the startup arguments, and inspect the logs for error messages.
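When troubleshooting, a quick way to verify that both servers are actually listening is a TCP connect check against their default ports (8000 for the MCP server, 11434 for Ollama). A small stdlib sketch:

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports; adjust if you started the servers with other settings.
for name, port in [("blender-mcp", 8000), ("ollama", 11434)]:
    status = "up" if is_listening("localhost", port) else "not reachable"
    print(f"{name} (port {port}): {status}")
```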
The server exposes tools for the following actions:

- Retrieves details about the current scene, including objects, camera, and render settings.
- Retrieves information about a specific object by name.
- Creates a 3D object with a given type, name, location, rotation, and scale.
- Modifies an object's properties, such as location, rotation, scale, and visibility.
- Deletes an object by name.
- Assigns a material to an object, including color customization.
- Renders an image to a file path.
- Executes Python code inside Blender to perform custom actions.
- Lists the PolyHaven asset categories available for browsing.
- Searches PolyHaven assets by type and category.
- Downloads a PolyHaven asset with a specified type and resolution.
- Applies a downloaded texture to a specified object.
- Sets the Ollama model to use for prompts.
- Sets the Ollama server URL used for prompts.
- Lists available Ollama models.
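The code-execution tool accepts arbitrary Blender Python. As an illustration, the snippet below uses standard `bpy` operators to add a cube and rename it; since `bpy` only exists inside Blender, it is kept here as a string to be sent to the tool (the `code` argument name is an assumption for illustration):

```python
# Blender Python intended for the code-execution tool; bpy is only
# importable inside Blender, so the snippet is held as a string here.
BPY_SNIPPET = """
import bpy
bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))
bpy.context.active_object.name = "my_cube"
"""

# Hypothetical argument shape for the tool call.
tool_arguments = {"code": BPY_SNIPPET}
print(tool_arguments["code"].strip().splitlines()[0])  # prints: import bpy
```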