This MCP server enables connections to Hugging Face Spaces with minimal setup, allowing you to use various AI models directly from Claude Desktop. By default, it connects to an image generation model, but can be configured for many other AI tasks.
Install a recent version of Node.js for your platform, then add the following to the mcpServers section of your claude_desktop_config.json file:
"mcp=hfspace": {
"command": "npx",
"args": [
"-y",
"@llmindset/mcp-hfspace"
]
}
Make sure you're using Claude Desktop 0.78 or greater. This basic setup will provide you with an image generator.
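For reference, a complete claude_desktop_config.json containing only this server might look like the sketch below; any other servers you already have would sit alongside it inside mcpServers:
{
  "mcpServers": {
    "mcp-hfspace": {
      "command": "npx",
      "args": [
        "-y",
        "@llmindset/mcp-hfspace"
      ]
    }
  }
}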
You can supply a list of Hugging Face Spaces in the arguments. The server will find the most appropriate endpoint for each and automatically configure it for use.
It's recommended to set a working directory for handling file uploads and downloads:
"mcp-hfspace": {
"command": "npx",
"args": [
"-y",
"@llmindset/mcp-hfspace",
"--work-dir=/Users/yourusername/mcp-store",
"shuttleai/shuttle-jaguar",
"styletts2/styletts2",
"Qwen/QVQ-72B-preview"
]
}
For private Spaces, supply your Hugging Face token with either the --hf-token=hf_... argument or the HF_TOKEN environment variable.
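As a sketch, the token can be passed through the env block of the server entry; the Space name and token value below are placeholders:
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "your-username/your-private-space"
  ],
  "env": {
    "HF_TOKEN": "hf_..."
  }
}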
By default, the server operates in Claude Desktop Mode:
The "Available Resources" prompt shows available files and mime types from your working directory.
You can use models like shuttleai/shuttle-3.1-aesthetic or FLUX.1-schnell to generate images, which will be saved to the working directory and included in Claude's context window.
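For example, a prompt along these lines (the wording is illustrative) will produce an image file in your working directory:
use FLUX.1-schnell to generate an image of a lighthouse at sunset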
Upload an image and use Spaces like merve/paligemma2-vqav2 to analyze it:
use paligemma to find out who is in "test_gemma.jpg"
You can also provide URLs:
use paligemma to detect humans in https://example.com/image.jpg
In Claude Desktop Mode, audio files are saved in your working directory and Claude is notified of their creation.
Use models like hf-audio/whisper-large-v3-turbo to transcribe audio files:
transcribe myaudio.mp3 using whisper
Specify a filename for tools like microsoft/OmniParser to analyze and return annotated images:
use omniparser to analyse ./screenshot.png
You can connect to chat models like Qwen/Qwen2.5-72B-Instruct to have Claude interact with other AI models.
You can target a specific API endpoint by appending it to the Space name:
Qwen/Qwen2.5-72B-Instruct/model_chat
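As an illustration, the endpoint is listed in the server arguments exactly like a plain Space name (the work directory shown is carried over from the earlier example):
"args": [
  "-y",
  "@llmindset/mcp-hfspace",
  "--work-dir=/Users/yourusername/mcp-store",
  "Qwen/Qwen2.5-72B-Instruct/model_chat"
]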
If something stops working suddenly, you may have exhausted your Hugging Face ZeroGPU quota. Try again after a short period, or set up your own Space.
For persistent issues, use the @modelcontextprotocol/inspector to diagnose problems.
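For example, you can launch the inspector against the same command and arguments you configured; the Space listed here is illustrative:
npx @modelcontextprotocol/inspector npx -y @llmindset/mcp-hfspace shuttleai/shuttle-jaguar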
There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server". When you click that button, the ~/.cursor/mcp.json file will be opened and you can add your server like this:
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
To add an MCP server to a project, you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then be able to see the tools the added MCP server exposes and will call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.
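For example, assuming the cursor-rules-mcp server above exposes a rule-lookup tool, you might ask:
use the cursor-rules-mcp tool to find the rules relevant to this file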