Hugging Face Spaces MCP server

Connects Claude Desktop to Hugging Face Spaces by automatically discovering and exposing Gradio endpoints as tools, enabling seamless interaction with machine learning models for text, image, and audio processing.
Provider: Shaun Smith
Release date: Mar 18, 2025
Language: TypeScript

This MCP server enables connections to Hugging Face Spaces with minimal setup, allowing you to use various AI models directly from Claude Desktop. By default, it connects to an image generation model, but can be configured for many other AI tasks.

Installation

Install a recent version of Node.js for your platform, then add the following to the mcpServers section of your claude_desktop_config.json file:

"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace"
  ]
}

Make sure you're using Claude Desktop 0.78 or greater. This basic setup will provide you with an image generator.

Configuration Options

You can supply a list of Hugging Face Spaces in the arguments. The server finds the most appropriate endpoint for each Space and configures it automatically.

It's recommended to set a working directory for handling file uploads and downloads:

"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "--work-dir=/Users/yourusername/mcp-store",
    "shuttleai/shuttle-jaguar",
    "styletts2/styletts2",
    "Qwen/QVQ-72B-preview"
  ]
}

For private spaces, supply your Hugging Face Token with either the --hf-token=hf_... argument or HF_TOKEN environment variable.
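For example, the token can be supplied through the env block of the server entry rather than on the command line, which keeps it out of the args list (a sketch; the token value is a placeholder you replace with your own):

```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "--work-dir=/Users/yourusername/mcp-store",
    "shuttleai/shuttle-jaguar"
  ],
  "env": {
    "HF_TOKEN": "hf_..."
  }
}
```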

Using the MCP Server

File Handling

By default, the server operates in Claude Desktop Mode:

  • Images are returned directly in tool responses
  • Other files are saved in the working folder with paths returned in messages
  • URLs can be supplied as inputs and their content is passed to the Space

The "Available Resources" prompt shows available files and MIME types from your working directory.

Example Use Cases

Image Generation

You can use models like shuttleai/shuttle-3.1-aesthetic or FLUX.1-schnell to generate images which will be saved to the work directory and included in Claude's context window.

Vision Model (Image Analysis)

Upload an image and use spaces like merve/paligemma2-vqav2 to analyze it:

use paligemma to find out who is in "test_gemma.jpg"

You can also provide URLs:

use paligemma to detect humans in https://example.com/image.jpg

Text-to-Speech

In Claude Desktop Mode, audio files are saved in your working directory and Claude is notified of their creation.

Speech-to-Text

Use models like hf-audio/whisper-large-v3-turbo to transcribe audio files:

transcribe myaudio.mp3 using whisper

Image-to-Image

Specify a filename for tools like microsoft/OmniParser to analyze and return annotated images:

use omniparser to analyse ./screenshot.png

Chat Models

You can connect to chat models like Qwen/Qwen2.5-72B-Instruct to have Claude interact with other AI models.

Specifying API Endpoints

You can target a specific API endpoint by appending it to the space name:

Qwen/Qwen2.5-72B-Instruct/model_chat
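The endpoint-qualified name goes in the args list exactly where a bare space name would (a sketch, reusing the endpoint shown above):

```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "Qwen/Qwen2.5-72B-Instruct/model_chat"
  ]
}
```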

Recommended Spaces

Image Generation

  • shuttleai/shuttle-3.1-aesthetic
  • black-forest-labs/FLUX.1-schnell
  • yanze/PuLID-FLUX
  • Inspyrenet-Rembg (Background Removal)
  • diyism/Datou1111-shou_xin

Chat Models

  • Qwen/Qwen2.5-72B-Instruct
  • prithivMLmods/Mistral-7B-Instruct-v0.3

Text-to-Speech

  • fantaxy/Sound-AI-SFX
  • parler-tts/parler_tts

Speech-to-Text

  • hf-audio/whisper-large-v3-turbo

Text-to-Music

  • haoheliu/audioldm2-text2audio-text2music

Vision Tasks

  • microsoft/OmniParser
  • merve/paligemma2-vqav2
  • merve/paligemma-doc
  • DawnC/PawMatchAI

Troubleshooting

Common Issues

  • Endpoints with unnamed parameters are unsupported
  • Claude Desktop 0.75 may time out instead of responding to errors
  • Claude Desktop has a hard timeout of 60s which may affect large jobs
  • ZeroGPU quotas on HuggingFace may lead to timeouts

If something stops working suddenly, you may have exhausted your HuggingFace ZeroGPU quota. Try again after a short period or set up your own Space.

For persistent issues, use the @modelcontextprotocol/inspector to diagnose problems.
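One way to launch the inspector is to wrap the same npx invocation used in the config above (a sketch; the space name is the image generator from the earlier example):

```shell
# Launch the MCP Inspector with mcp-hfspace as the server under test
npx @modelcontextprotocol/inspector npx -y @llmindset/mcp-hfspace shuttleai/shuttle-jaguar
```

This opens the inspector's web UI, where you can list the tools the server exposes and invoke them directly to see raw responses and errors.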

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.
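For instance, a project-level .cursor/mcp.json wiring up the Hugging Face Spaces server from this page might look like this (a sketch, using the same package and server name as the Claude Desktop examples above):

```json
{
    "mcpServers": {
        "mcp-hfspace": {
            "command": "npx",
            "args": [
                "-y",
                "@llmindset/mcp-hfspace"
            ]
        }
    }
}
```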

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the available tools the added MCP server has available and will call them when it needs to.

You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.
