MCP Image Validator MCP Server
An MCP server for validating generated images through other MCPs.
Configuration
{
  "mcpServers": {
    "aronmav-mcp-image-validator": {
      "command": "python",
      "args": [
        "server.py"
      ],
      "env": {
        "VISION_MODEL": "qwen3-vl:235b-cloud",
        "OLLAMA_API_KEY": "YOUR_API_KEY_HERE",
        "OLLAMA_BASE_URL": "https://ollama.com/v1",
        "VISION_MAX_TOKENS": "1000",
        "VISION_TEMPERATURE": "0.2"
      }
    }
  }
}

This sets up an MCP server that analyzes images using the Qwen3-VL vision model via Ollama Cloud. The server exposes a simple MCP endpoint you can query from clients like Claude Code to generate detailed image descriptions and insights without needing local GPU power.
Run this MCP server locally, or connect through any client that speaks the MCP protocol. Send a prompt containing an image path and receive a detailed description generated by the Qwen3-VL model through Ollama Cloud.
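As a sketch of what such a request looks like on the wire, the payload below follows the OpenAI-compatible chat format (which Ollama's /v1 endpoint supports); the function name, the PNG data-URL assumption, and the default prompt are illustrative, not taken from server.py.

```python
import base64

def build_vision_request(image_bytes, model="qwen3-vl:235b-cloud",
                         prompt="Describe this image in detail."):
    # Encode the image as a base64 data URL, the form the
    # OpenAI-compatible chat endpoint accepts for image input.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "max_tokens": 1000,   # mirrors VISION_MAX_TOKENS in the config
        "temperature": 0.2,   # mirrors VISION_TEMPERATURE in the config
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
```

A client would POST this payload to the chat-completions path under OLLAMA_BASE_URL, with the API key in an `Authorization: Bearer` header.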
Prerequisites for using this server are Python 3.10 or higher and a valid Ollama Cloud API key. Ensure you have network access to Ollama Cloud and that your key has the required permissions.
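These two prerequisites can be checked up front with a small script; this is a convenience sketch (the function name is ours, not part of server.py), assuming the OLLAMA_API_KEY variable described below.

```python
import os
import sys

def check_prerequisites(env=os.environ, version=sys.version_info):
    """Return a list of problems; an empty list means the setup looks OK."""
    problems = []
    if tuple(version[:2]) < (3, 10):
        problems.append("Python 3.10+ is required")
    if not env.get("OLLAMA_API_KEY"):
        problems.append("OLLAMA_API_KEY is not set")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("WARNING:", problem)
```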
To start the server, run the Python command shown in the installation steps below, then interact with it through your MCP client. The server runs as a standard MCP stdio endpoint and listens for protocol messages on stdin.
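To make "stdio endpoint" concrete: MCP messages are JSON-RPC 2.0 objects exchanged over stdin/stdout. The toy handler below only illustrates that request/response shape; the real server is built on an MCP SDK with proper tool schemas, and both names here (`handle_message`, `describe`) are illustrative.

```python
import json

def handle_message(raw, describe=lambda path: f"(description of {path})"):
    # Parse one JSON-RPC request and answer with a response carrying
    # the same id. `describe` stands in for the real vision call.
    msg = json.loads(raw)
    path = msg.get("params", {}).get("path", "")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": msg.get("id"),
        "result": describe(path),
    })
```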
In Claude Code or another MCP client, configure the MCP connection with the provided stdio setup. You will supply the Python command and the path to the server script, along with your API key in the environment. After configuration, you can request image descriptions by asking the client to describe an image at a specific path.
# Prerequisites
# Install Python 3.10+ if needed
# Install Git to clone the project
# Clone the repository
git clone <url-to-repository>
cd mcp-image-validator
# Install Python dependencies
pip install -r requirements.txt

# Copy configuration template
cp .env.example .env
# Edit the environment file to add your API key
# You can obtain an API key in the Ollama Cloud settings

Edit the .env file to include your credentials and model settings exactly as shown in the sample below.
OLLAMA_API_KEY=YOUR_API_KEY_HERE
VISION_MODEL=qwen3-vl:235b-cloud
OLLAMA_BASE_URL=https://ollama.com/v1
VISION_TEMPERATURE=0.2
VISION_MAX_TOKENS=1000

After configuring, run the server directly to start handling MCP messages.
python server.py

Configuration notes: you must provide your Ollama Cloud API key and ensure the vision model name ends with -cloud (for example, qwen3-vl:235b-cloud). The server uses an Ollama Cloud client to send image analysis requests to the Qwen3-VL model and returns a detailed image description.
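The -cloud naming rule and the other requirements can be verified before starting the server. A minimal sanity check over the .env keys shown earlier (the function name is ours, for illustration):

```python
def validate_config(cfg):
    """Return a list of configuration errors for the .env keys used above."""
    errors = []
    model = cfg.get("VISION_MODEL", "")
    if not model.endswith("-cloud"):
        errors.append(f"VISION_MODEL should end with -cloud, got {model!r}")
    if not cfg.get("OLLAMA_API_KEY"):
        errors.append("OLLAMA_API_KEY is missing")
    if not cfg.get("OLLAMA_BASE_URL", "").startswith("https://"):
        errors.append("OLLAMA_BASE_URL should be an https:// URL")
    return errors
```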
Testing and validation: you can run the full test script to verify connectivity to Ollama Cloud, locate a test image in the repository, and verify that the image is analyzed with a detailed description. If a test image is not found, you will be prompted to provide a path to your own image.
Integration with Claude Code: add the MCP server configuration under your editor’s MCP settings. Use an absolute path to the server script and pass your API key in the environment section. After saving, restart the editor and start describing images via the editor’s command palette or chat prompt.
Troubleshooting tips: ensure the API key is valid, check that the Ollama Cloud URL is reachable, and verify the image path you provide is absolute. If you encounter slow responses, remember that the 235B parameter model may take longer to process images due to network latency and model load.
Keep your Ollama Cloud API key secure. Do not commit keys to version control. Use environment variables to pass sensitive information to the MCP runtime and client configurations.
Performance considerations: cloud-based vision models incur network latency. For faster responses, consider using smaller models if supported, or optimize client requests to batch operations when appropriate.
The server exposes a single tool, which analyzes an image and provides a detailed description using Qwen3-VL.