MCP server for OpenRouter providing text chat and image analysis tools
Configuration
{
"mcpServers": {
"stabgan-openrouter-mcp-multimodal": {
"command": "npx",
"args": [
"-y",
"@stabgan/openrouter-mcp-multimodal"
],
"env": {
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free",
"OPENROUTER_API_KEY": "your-api-key-here"
}
}
}
}

You will run an MCP server that combines text chat with multimodal image analysis, powered by OpenRouter.ai. This server lets you chat with a wide range of models, ask image-related questions, and manage model choices efficiently, all with practical, production-ready configuration options.
You interact with the server through an MCP client to start conversations with OpenRouter.ai models and optionally attach images for analysis. Use it to run simple text chats or multimodal sessions where you send text alongside images. You can search for, validate, and select models, customize parameters like temperature, and rely on automatic retry and rate-limiting handling to keep conversations flowing. For image analysis, you can provide a single image or multiple images, ask specific questions, and benefit from automatic image resizing and optimization.
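The automatic retry behavior mentioned above can be pictured as a simple exponential backoff loop. The sketch below is illustrative only (the function name `withRetry` and the retry policy are assumptions, not the server's actual implementation): retry a failing request a few times, doubling the wait between attempts.

```typescript
// Illustrative exponential-backoff sketch; the server's real retry and
// rate-limit policy may differ. Retries `fn` up to `maxAttempts` times,
// waiting baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Back off before the next attempt (skip the wait after the last one).
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  }
  throw lastErr;
}
```

In practice the server applies this kind of handling for you, so a conversation that hits a transient rate limit continues without manual intervention.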
Common usage patterns include starting a chat session with a chosen model, continuing the conversation with context, and, when needed, introducing images with explicit questions such as “What’s in this image?” or “Describe the objects detected.” The server supports both plain text and multimodal messages, enabling you to build richer interactions in your applications.
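A multimodal message like “What’s in this image?” plus an attached image ultimately reaches OpenRouter in its OpenAI-compatible chat format, where a message’s content can be an array mixing text parts and image parts. The sketch below shows that shape; the helper `buildImageQuestion` is a hypothetical name for illustration, not part of the server’s API.

```typescript
// OpenRouter's OpenAI-compatible chat format: content can be a plain string
// or an array of typed parts mixing text and image_url entries.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string | ContentPart[];
}

// Hypothetical helper: pair one question with one or more images
// in a single user message.
function buildImageQuestion(
  question: string,
  imageUrls: string[]
): ChatMessage {
  return {
    role: "user",
    content: [
      { type: "text", text: question },
      ...imageUrls.map((url) => ({
        type: "image_url" as const,
        image_url: { url },
      })),
    ],
  };
}

// Example request body an MCP client interaction might produce:
const body = {
  model: "qwen/qwen2.5-vl-32b-instruct:free",
  temperature: 0.2,
  messages: [
    buildImageQuestion("What's in this image?", [
      "https://example.com/photo.jpg",
    ]),
  ],
};
console.log(JSON.stringify(body.messages[0].content));
```

The server builds messages of this form for you when you attach images, so you normally only supply the question text and the image paths or URLs.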
Prerequisites: ensure you have Node.js 18 or later installed on your system. You also need an OpenRouter API key to access model capabilities.
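If you want to verify the Node.js prerequisite programmatically, a minimal check (illustrative, not part of the server) looks like this:

```typescript
// Parse the major version out of process.version (e.g. "v20.11.0" -> 20)
// and warn if it is below the required 18.
const nodeMajor = Number(process.version.slice(1).split(".")[0]);
if (nodeMajor < 18) {
  console.error(`Node.js 18+ is required; found ${process.version}`);
}
```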
Option A: Install via npm globally
npm install -g @stabgan/openrouter-mcp-multimodal

Option B: Run via Docker
docker run -i -e OPENROUTER_API_KEY=your-api-key-here stabgandocker/openrouter-mcp-multimodal:latest

Configure the MCP server in your MCP settings file. You can run the server locally with npx, uv, Docker, or Smithery.
Option 1: Using npx (Node.js)
{
"mcpServers": {
"openrouter": {
"command": "npx",
"args": [
"-y",
"@stabgan/openrouter-mcp-multimodal"
],
"env": {
"OPENROUTER_API_KEY": "your-api-key-here",
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free"
}
}
}
}

Option 2: Using uv (Python Package Manager)
{
"mcpServers": {
"openrouter": {
"command": "uv",
"args": [
"run",
"-m",
"openrouter_mcp_multimodal"
],
"env": {
"OPENROUTER_API_KEY": "your-api-key-here",
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free"
}
}
}
}

Option 3: Using Docker
{
"mcpServers": {
"openrouter": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"-e", "OPENROUTER_API_KEY=your-api-key-here",
"-e", "DEFAULT_MODEL=qwen/qwen2.5-vl-32b-instruct:free",
"stabgandocker/openrouter-mcp-multimodal:latest"
]
}
}
}

Option 4: Using Smithery (recommended)
{
"mcpServers": {
"openrouter": {
"command": "smithery",
"args": [
"run",
"stabgan/openrouter-mcp-multimodal"
],
"env": {
"OPENROUTER_API_KEY": "your-api-key-here",
"DEFAULT_MODEL": "qwen/qwen2.5-vl-32b-instruct:free"
}
}
}
}

If you encounter image processing failures, the server includes fallback options and diagnostics to help you identify issues quickly. Ensure your API key is valid and that the DEFAULT_MODEL string matches an available model in your OpenRouter account.
Keep your API key secure. Do not expose OPENROUTER_API_KEY in client-side code or logs. Use environment management to keep keys out of source files. When deploying in production, prefer Docker or Smithery workflows to isolate credentials.
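One way to follow this advice in application code is to read the key from the environment, fail fast when it is missing, and redact it in any log output. The helper names below (`loadApiKey`, `redactKey`) are hypothetical, shown only to illustrate the pattern:

```typescript
// Read the key from the environment; never hard-code it in source files.
function loadApiKey(): string {
  const key = process.env.OPENROUTER_API_KEY;
  if (!key) {
    throw new Error(
      "OPENROUTER_API_KEY is not set; configure it in your MCP settings env block."
    );
  }
  return key;
}

// When logging configuration, print only a short prefix of the key.
function redactKey(key: string): string {
  return key.slice(0, 4) + "...";
}
```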
Send text or multimodal messages to OpenRouter models, with optional temperature control and model selection. Includes support for image inputs in multimodal conversations.