LLMS.txt Documentation MCP Server

Provides AI systems with access to documentation from llms.txt files by fetching and parsing content from specified URLs, enabling seamless documentation lookup during coding sessions.
Provider: LangChain
Release date: Mar 18, 2025
Language: Python
Package: mcpdoc
Stats: 11.3K downloads, 392 stars

MCP LLMS-TXT Documentation Server is an open-source server that gives developers full control over tools used by MCP host applications (such as Cursor, Windsurf, and Claude Code/Desktop). It provides these applications with a user-defined list of llms.txt files and a tool to read URLs within those files, enabling complete auditing of tool calls and returned context.

Installation

Installing the Server

First, you need to install the uv package manager:

curl -LsSf https://astral.sh/uv/install.sh | sh

For alternative installation methods, refer to the official uv documentation.
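
If you already have Python tooling set up, uv is also published on PyPI, so one documented alternative is a pip-based install:

pip install uv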

Basic Usage

Running the Server Locally

To test the MCP server locally with your chosen llms.txt files:

uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" "LangChain:https://python.langchain.com/llms.txt" \
    --transport sse \
    --port 8082 \
    --host localhost

This runs the server at http://localhost:8082.

Testing with MCP Inspector

You can test tool calls using the MCP inspector:

npx @modelcontextprotocol/inspector
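
The inspector can also launch the stdio server for you; a minimal sketch, assuming the inspector's command-passthrough form (the -- separator keeps the server's own flags from being parsed by the inspector):

npx @modelcontextprotocol/inspector -- uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" \
    --transport stdio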

Connecting to IDEs and Applications

Connecting to Cursor

  1. Open Cursor Settings and navigate to the MCP tab (this will open ~/.cursor/mcp.json)
  2. Paste the following configuration:
{
  "mcpServers": {
    "langgraph-docs-mcp": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--urls",
        "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt LangChain:https://python.langchain.com/llms.txt",
        "--transport",
        "stdio"
      ]
    }
  }
}
  3. Update Cursor Global (User) Rules with the following:
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer -- 
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt 
+ reflect on the input question 
+ call fetch_docs on any urls relevant to the question
+ use this to answer the question
  4. Press CMD+L (on Mac) to open chat and ensure "agent" is selected
  5. Try a sample prompt like "what are types of memory in LangGraph?"

Connecting to Windsurf

  1. Open Cascade with CMD+L (on Mac)
  2. Click "Configure MCP" to open ~/.codeium/windsurf/mcp_config.json
  3. Update with the same "langgraph-docs-mcp" configuration used for Cursor
  4. Update Windsurf Rules/Global rules with similar rules to those used for Cursor
  5. Try the same example prompt to test the functionality

Connecting to Claude Desktop

  1. Open Settings/Developer to update ~/Library/Application\ Support/Claude/claude_desktop_config.json
  2. Update with the "langgraph-docs-mcp" configuration as used previously
  3. Restart Claude Desktop app
  4. Append the following to your prompt (as Claude Desktop currently doesn't support global rules):
<rules>
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer -- 
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt 
+ reflect on the input question 
+ call fetch_docs on any urls relevant to the question
</rules>
  5. Try the example prompt and approve tool calls when prompted

Connecting to Claude Code

  1. After installing Claude Code, run this command to add the MCP server:
claude mcp add-json langgraph-docs '{"type":"stdio","command":"uvx","args":["--from", "mcpdoc", "mcpdoc", "--urls", "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt", "--urls", "LangChain:https://python.langchain.com/llms.txt"]}' -s local
  2. Launch Claude Code and run /mcp to view your tools
  3. Similar to Claude Desktop, append the rules to your prompt
  4. Try the example prompt and approve tool calls when prompted

Command-line Interface Options

You can specify documentation sources in three ways:

Using a YAML Config File

mcpdoc --yaml sample_config.yaml

Using a JSON Config File

mcpdoc --json sample_config.json

Directly Specifying URLs

mcpdoc --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt --urls LangChain:https://python.langchain.com/llms.txt

You can combine these methods:

mcpdoc --yaml sample_config.yaml --json sample_config.json --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt

Additional Options

mcpdoc --yaml sample_config.yaml --follow-redirects --timeout 15

Here --follow-redirects instructs mcpdoc to follow HTTP redirects when fetching documentation, and --timeout sets the HTTP request timeout in seconds; these correspond to the follow_redirects and timeout arguments in the programmatic API below.

Security and Domain Access Control

For security reasons, mcpdoc implements strict domain access controls:

  1. When you specify a remote llms.txt URL, mcpdoc automatically adds only that specific domain to the allowed domains list.
  2. When using a local file, NO domains are automatically added to the allowed list. You MUST explicitly specify which domains to allow using the --allowed-domains parameter.
  3. To allow fetching from additional domains (see the example after this list):
    • Use --allowed-domains domain1.com domain2.com to add specific domains
    • Use --allowed-domains '*' to allow all domains (use with caution)
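
For example, the following sketch combines flags shown in this guide to serve a remote llms.txt while also permitting fetches from one extra domain (adjust the domain to wherever the linked pages actually live):

uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" \
    --allowed-domains python.langchain.com \
    --transport stdio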

Configuration Format Examples

YAML Configuration (sample_config.yaml)

# Each entry must have an llms_txt URL and, optionally, a name
- name: LangGraph Python
  llms_txt: https://langchain-ai.github.io/langgraph/llms.txt
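
The security section above also mentions local llms.txt files; assuming local paths are accepted in the llms_txt field (the path below is illustrative), such an entry might look like this, in which case domains must be allowed explicitly via --allowed-domains:

# Illustrative local-file entry; no domains are auto-allowed for local files
- name: Local LangGraph docs
  llms_txt: ./docs/llms.txt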

JSON Configuration (sample_config.json)

[
  {
    "name": "LangGraph Python",
    "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt"
  }
]

Programmatic Usage

from mcpdoc.main import create_server

# Create a server with documentation sources
server = create_server(
    [
        {
            "name": "LangGraph Python",
            "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt",
        },
    ],
    follow_redirects=True,
    timeout=15.0,
)

# Run the server
server.run(transport="stdio")
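
The same API can serve multiple sources at once; a sketch mirroring the two-source SSE example from the Basic Usage section, assuming the transport argument accepts "sse" just as the CLI's --transport flag does:

from mcpdoc.main import create_server

# Sketch: two documentation sources served over SSE, mirroring
# the two-source CLI example at the top of this guide
server = create_server(
    [
        {
            "name": "LangGraph Python",
            "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt",
        },
        {
            "name": "LangChain",
            "llms_txt": "https://python.langchain.com/llms.txt",
        },
    ],
    follow_redirects=True,
    timeout=15.0,
)

server.run(transport="sse")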

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can instead add it to that project by creating (or editing) a .cursor/mcp.json file in the project root.

Adding an MCP server to Cursor globally

To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button, the ~/.cursor/mcp.json file opens and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project, create a new .cursor/mcp.json file or add the server to an existing one. The configuration looks exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then see the tools the added MCP server provides and call them when needed.

You can also explicitly ask the agent to use a tool by mentioning it by name and describing what it does.
