Hugging Face MCP server

Provides direct access to thousands of Hugging Face models and resources with optional authentication, enabling natural language processing and image generation capabilities within conversation interfaces.
Provider
Shreyas Karnik
Release date
Mar 20, 2025
Language
Python
Stats
41 stars

This MCP server provides read-only access to Hugging Face Hub APIs, allowing Claude and other LLMs to interact with Hugging Face's models, datasets, spaces, papers, and collections through a structured protocol interface.

Installation

Via Smithery (Recommended)

The easiest way to install the Hugging Face MCP server for Claude Desktop is through Smithery:

npx -y @smithery/cli install @shreyaskarnik/huggingface-mcp-server --client claude

Manual Configuration

You can also manually configure the server in Claude Desktop:

On macOS: Edit the configuration file at: ~/Library/Application\ Support/Claude/claude_desktop_config.json

On Windows: Edit the configuration file at: %APPDATA%\Claude\claude_desktop_config.json

Add the following to your configuration file:

{
  "mcpServers": {
    "huggingface": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/huggingface-mcp-server",
        "run",
        "huggingface_mcp_server.py"
      ],
      "env": {
        "HF_TOKEN": "your_token_here"
      }
    }
  }
}

The HF_TOKEN entry is optional; since JSON does not allow comments, omit the "env" block entirely if you are not using a token.

Configuration

The server works without additional configuration, but you can enhance its capabilities:

Authentication (Optional)

Setting the HF_TOKEN environment variable with your Hugging Face API token provides:

  • Higher API rate limits
  • Access to private repositories (if authorized)
  • Improved reliability for high-volume requests
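The server's actual token handling may differ, but the optional-authentication pattern above can be sketched as a small helper that adds an Authorization header only when HF_TOKEN is present in the environment (the function name is illustrative, not from the server's source):

```python
import os

def build_hub_headers():
    """Build HTTP headers for a Hugging Face Hub API request.

    The Authorization header is included only when HF_TOKEN is set,
    mirroring the server's optional-authentication behavior.
    """
    headers = {"Accept": "application/json"}
    token = os.environ.get("HF_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers

# Without HF_TOKEN set, requests go out anonymously (lower rate limits).
os.environ.pop("HF_TOKEN", None)
print(build_hub_headers())  # {'Accept': 'application/json'}
```

Unauthenticated requests still work against public repositories; the token only raises rate limits and unlocks private repositories you are authorized to see.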

Server Capabilities

Available Resources

The server exposes Hugging Face resources through custom URIs:

  • Models: hf://model/{model_id}
  • Datasets: hf://dataset/{dataset_id}
  • Spaces: hf://space/{space_id}
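A client consuming these URIs needs to split them back into a resource type and an ID. One way to do that (a sketch, not the server's own parsing code) is with urlparse, which places the resource type in the netloc and the ID, including any org prefix, in the path:

```python
from urllib.parse import urlparse

VALID_TYPES = {"model", "dataset", "space"}

def parse_hf_uri(uri):
    """Split an hf:// resource URI into (resource_type, resource_id).

    For hf://model/{model_id}, urlparse puts 'model' in netloc and the
    ID (which may contain slashes, e.g. an org prefix) in path.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "hf" or parsed.netloc not in VALID_TYPES:
        raise ValueError(f"not a recognized hf:// resource URI: {uri}")
    return parsed.netloc, parsed.path.lstrip("/")

print(parse_hf_uri("hf://model/mistralai/Mistral-7B-v0.1"))
# ('model', 'mistralai/Mistral-7B-v0.1')
```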

Prompt Templates

The server provides two specialized prompt templates:

Compare Models

Generates comparisons between multiple Hugging Face models:

  • Requires model_ids argument (comma-separated)
  • Example: "Compare the Llama-3-8B and Mistral-7B models"

Summarize Paper

Summarizes research papers from Hugging Face:

  • Requires arxiv_id argument
  • Optional detail_level argument (brief/detailed)
  • Example: "Summarize the paper with arXiv ID 2307.09288"
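The argument handling for these two templates can be sketched as follows; the helper names and exact prompt wording are illustrative assumptions, but the validation rules (comma-separated model_ids, detail_level restricted to brief/detailed) come from the descriptions above:

```python
def build_compare_prompt(model_ids):
    """Turn the comma-separated model_ids argument into a prompt string."""
    ids = [m.strip() for m in model_ids.split(",") if m.strip()]
    if len(ids) < 2:
        raise ValueError("model_ids must name at least two models")
    return f"Compare the following Hugging Face models: {', '.join(ids)}"

def build_summary_prompt(arxiv_id, detail_level="brief"):
    """Assemble the summarize-paper prompt; detail_level is brief or detailed."""
    if detail_level not in ("brief", "detailed"):
        raise ValueError("detail_level must be 'brief' or 'detailed'")
    return f"Give a {detail_level} summary of the paper with arXiv ID {arxiv_id}"

print(build_compare_prompt("Llama-3-8B, Mistral-7B"))
# Compare the following Hugging Face models: Llama-3-8B, Mistral-7B
```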

Available Tools

Model Tools

  • search-models: Search with filters for query, author, tags, and limit
  • get-model-info: Get detailed information about a specific model

Dataset Tools

  • search-datasets: Search datasets with filters
  • get-dataset-info: Get detailed information about a specific dataset

Space Tools

  • search-spaces: Search Spaces with filters including SDK type
  • get-space-info: Get detailed information about a specific Space

Paper Tools

  • get-paper-info: Get information about a paper and its implementations
  • get-daily-papers: Get the list of curated daily papers

Collection Tools

  • search-collections: Search collections with various filters
  • get-collection-info: Get detailed information about a specific collection
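Under the hood, a tool like search-models ultimately queries the public Hub REST API. The sketch below shows how its filters might map onto that API's query parameters (search, author, filter, limit); the helper name is hypothetical and the server's actual request code may differ:

```python
from urllib.parse import urlencode

HUB_API = "https://huggingface.co/api/models"

def build_model_search_url(query=None, author=None, tags=None, limit=10):
    """Build a Hub model-search URL from the search-models tool's filters.

    Tags are passed via the Hub API's 'filter' parameter; empty filters
    are simply omitted from the query string.
    """
    params = {}
    if query:
        params["search"] = query
    if author:
        params["author"] = author
    if tags:
        params["filter"] = ",".join(tags)
    params["limit"] = limit
    return f"{HUB_API}?{urlencode(params)}"

print(build_model_search_url(query="bert", author="google", limit=5))
# https://huggingface.co/api/models?search=bert&author=google&limit=5
```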

Example Usage

When using the server with Claude, try these example prompts:

  • "Search for BERT models on Hugging Face with fewer than 100 million parameters"
  • "Find the most popular datasets for text classification on Hugging Face"
  • "What are today's featured AI research papers on Hugging Face?"
  • "Compare the Llama-3-8B and Mistral-7B models from Hugging Face"
  • "Show me the most popular Gradio spaces for image generation"
  • "Find collections created by TheBloke that include Mixtral models"

Troubleshooting

If you encounter issues:

  1. Check server logs:

    • macOS: ~/Library/Logs/Claude/mcp-server-huggingface.log
    • Windows: %APPDATA%\Claude\logs\mcp-server-huggingface.log
  2. API rate limiting: Consider adding a Hugging Face API token

  3. Connectivity: Ensure your machine has internet access to reach the Hugging Face API

  4. Verify data: If a tool fails, check if the data exists on the Hugging Face website

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can instead add it to that project's .cursor/mcp.json file, creating the file if it does not already exist.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button, the ~/.cursor/mcp.json file will open and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then see the tools the added MCP server exposes and call them when needed.

You can also explicitly ask the agent to use a tool by mentioning its name and describing what it does.
