
Hugging Face MCP Server

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "huggingface-hf-mcp-server": {
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_HF_TOKEN>"
      }
    }
  }
}

Note: settings such as TRANSPORT, HF_API_TIMEOUT, PROXY_TOOLS_CSV, DEFAULT_HF_TOKEN, AUTHENTICATE_TOOL, MCP_STRICT_COMPLIANCE, and GRADIO_SKIP_INITIALIZE are environment variables for a self-hosted server process, not client request headers; set them in the server's environment instead.

You can connect your local or cloud-based clients to the Hugging Face MCP Server to expose endpoints from the Hugging Face Hub, Gradio apps, and related tools. This server provides multiple transports (HTTP via a remote MCP endpoint and STDIO for local usage) to enable flexible integration with your workflows.

How to use

Connect to the Hugging Face MCP Server with an MCP client. The server supports both HTTP-based MCP endpoints and local STDIO operation: for HTTP, point the client at the remote endpoint and authenticate with your Hugging Face token; for STDIO, run the server locally and communicate with it over standard input and output.
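For the HTTP transport, the handshake is a standard MCP JSON-RPC `initialize` request. The sketch below builds such a payload and validates it locally; the commented `curl` line shows how it would be sent. The protocol version string and `clientInfo` values are illustrative placeholders, not values prescribed by this server.

```shell
# Build a minimal MCP JSON-RPC "initialize" payload (field names follow the
# MCP specification; clientInfo values here are placeholders).
cat > init.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
EOF

# Sanity-check that the payload is valid JSON before sending it.
python3 -m json.tool init.json > /dev/null && echo "payload OK"

# To send it to the remote endpoint (requires a valid token in $HF_TOKEN):
# curl -s https://huggingface.co/mcp \
#   -H "Authorization: Bearer $HF_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d @init.json
```

In practice your MCP client performs this handshake for you; the sketch is only useful for verifying connectivity and authentication by hand.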

How to install

Prerequisites: you need Node.js with npm or pnpm installed, or you can run the server via Docker. The README describes how to start in STDIO mode, as well as HTTP-related modes. Follow these concrete steps to set up the server in your environment.

# Quick start: Run the MCP server in STDIO mode locally
npx @llmindset/hf-mcp-server       # Start in STDIO mode

# Alternatively, run the HTTP-enabled mode (Streamable HTTP)
npx @llmindset/hf-mcp-server-http  # Start in Streamable HTTP mode
npx @llmindset/hf-mcp-server-json  # Start in Streamable HTTP (JSON RPC) mode

# If you prefer Docker
docker pull ghcr.io/evalstate/hf-mcp-server:latest
# Default: Streamable HTTP mode on port 3000
docker run --rm -p 3000:3000 ghcr.io/evalstate/hf-mcp-server:latest

# To run the HTTP transport with a default token, pass it as an
# environment variable (provide the actual token securely at runtime;
# never hard-code it in scripts)
docker run --rm -p 3000:3000 \
  -e DEFAULT_HF_TOKEN=hf_xxx \
  ghcr.io/evalstate/hf-mcp-server:latest

Additional sections

Configuration and startup differences are described below to help you tailor the server to your environment. You can also configure your Tools and Spaces on the Hugging Face Hub after installation.

Configuration and transport details

The server supports multiple transports: the HTTP transport exposes the MCP endpoint under /mcp, while STDIO uses the process's standard input and output. By default, the web UI and the Streamable HTTP service both listen on port 3000.

Environment variables influence behavior such as transport selection, token handling, and session management. You can adjust how the server authenticates and communicates with the Hugging Face API.
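As a sketch (variable names are taken from the configuration notes above; exact semantics and accepted values may vary by server version), a self-hosted HTTP instance might be started like this:

```shell
# Assumed environment variables, mirroring the configuration keys above:
export HF_API_TIMEOUT="12500"       # Hugging Face API timeout (assumed milliseconds)
export DEFAULT_HF_TOKEN="hf_xxx"    # fallback token when a client supplies none

npx @llmindset/hf-mcp-server-http   # start the Streamable HTTP transport
```

This is a configuration fragment, not a complete deployment recipe; consult the README for the authoritative list of variables.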

Proxy tools can be loaded via a CSV at startup to register additional Streamable HTTP endpoints. This lets you integrate external MCP endpoints as proxy sources.

Security and tokens

When using the HTTP transport, you typically provide a Bearer token for authorization. If a default token is set for development or testing, it will be used when no token is supplied by the client. Never share your token in public code or logs.

Notes and tips

Tip: Add ?no_image_content=true to the Hugging Face MCP URL to remove ImageContent blocks from Gradio servers.
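Applied to a client configuration, the tip means appending the query parameter to the server URL, e.g.:

```json
"url": "https://huggingface.co/mcp?no_image_content=true"
```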

Troubleshooting

If you encounter connectivity issues, verify that the correct transport is configured (stdio vs. http) and that the token provided in the client headers matches what the server expects. Ensure the server port (default 3000) is accessible from your client environment.
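For a quick local reachability check (a sketch assuming a bash shell and the default port 3000), you can probe the port directly without any extra tooling:

```shell
# Probe the default server port using bash's /dev/tcp virtual device
# (the open happens in a subshell, so no file descriptor leaks).
if (exec 3<>/dev/tcp/localhost/3000) 2>/dev/null; then
  echo "port 3000 reachable"
else
  echo "port 3000 not reachable"
fi
```

If the port is reachable but requests still fail, compare the client's Authorization header against the token the server expects.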

Available tools

authenticate

Provide an OAuth challenge to authorize tool usage when invoked as a tool.

hf_doc_search

Search Hugging Face documentation for relevant content.

hf_doc_fetch

Fetch documentation content from Hugging Face sources for display.