The MCP Documentation Server is an open-source tool that provides AI coding assistants like Cursor, Windsurf, and Claude with controlled access to documentation through the Model Context Protocol (MCP). It allows you to serve llms.txt files with full control over document retrieval and context, letting you audit what information is accessed by your AI assistants.
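An llms.txt file is simply a markdown index of documentation URLs with short descriptions, which the server fetches on behalf of the assistant. A minimal sketch of the format (the entries below are illustrative, not taken from the real LangGraph index):
# LangGraph
> A framework for building stateful, multi-agent LLM applications
- [Quickstart](https://langchain-ai.github.io/langgraph/tutorials/): Build your first graph
- [Concepts](https://langchain-ai.github.io/langgraph/concepts/): Core ideas and terminology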
First, install the uv package manager:
curl -LsSf https://astral.sh/uv/install.sh | sh
Then run the MCP server with uvx, which downloads mcpdoc on demand, so no separate installation step is needed:
uvx --from mcpdoc mcpdoc
Launch the MCP server with one or more llms.txt URLs:
uvx --from mcpdoc mcpdoc \
--urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" "LangChain:https://python.langchain.com/llms.txt" \
--transport sse \
--port 8082 \
--host localhost
This starts the server at http://localhost:8082 with documentation for LangGraph and LangChain.
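As a quick check that the server is up, you can open its SSE endpoint (a sketch; /sse is the conventional MCP SSE path and is an assumption here, not taken from the mcpdoc docs):
curl -N http://localhost:8082/sse
The -N flag disables curl's buffering so you can watch the event stream.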
The server implements strict domain access controls:
- --allowed-domains domain1.com domain2.com to allow fetching from the listed additional domains
- --allowed-domains '*' to allow all domains (use with caution)
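For example, launching the server with an explicit allow-list (a sketch assembled from the flags above; the extra domain is illustrative):
uvx --from mcpdoc mcpdoc \
--urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" \
--allowed-domains python.langchain.com \
--transport sse \
--port 8082
You can test the server using the MCP inspector tool: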
npx @modelcontextprotocol/inspector
Connect to your running server (for the SSE example above, at http://localhost:8082) and try the list_doc_sources and fetch_docs tool calls.
To connect to Cursor, update your ~/.cursor/mcp.json file with:
{
"mcpServers": {
"langgraph-docs-mcp": {
"command": "uvx",
"args": [
"--from",
"mcpdoc",
"mcpdoc",
"--urls",
"LangGraph:https://langchain-ai.github.io/langgraph/llms.txt LangChain:https://python.langchain.com/llms.txt",
"--transport",
"stdio"
]
}
}
}
Then add the following to your Cursor user rules (Settings > Rules):
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
+ use this to answer the question
To connect to Windsurf, update your ~/.codeium/windsurf/mcp_config.json file with the same configuration as for Cursor.
To connect to Claude Desktop, update your ~/Library/Application Support/Claude/claude_desktop_config.json file with the same configuration. Claude Desktop does not currently support global rules, so append the following to your prompts:
<rules>
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
+ use this to answer the question
</rules>
In a terminal, run:
claude mcp add-json langgraph-docs '{"type":"stdio","command":"uvx","args":["--from", "mcpdoc", "mcpdoc", "--urls", "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt", "LangChain:https://python.langchain.com/llms.txt"]}' -s local
Launch Claude Code and test with:
$ claude
$ /mcp
Include the rules in your prompt as with Claude Desktop.
The server accepts additional options, such as --follow-redirects to follow HTTP redirects and --timeout to set the HTTP request timeout in seconds:
mcpdoc --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt --follow-redirects --timeout 15
You can specify documentation sources in three ways:
mcpdoc --yaml sample_config.yaml
mcpdoc --json sample_config.json
mcpdoc --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt
These methods can be combined to merge documentation sources.
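For example (a sketch combining the three source flags shown above):
mcpdoc --yaml sample_config.yaml --json sample_config.json --urls LangGraph:https://langchain-ai.github.io/langgraph/llms.txt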
A sample_config.yaml looks like this:
- name: LangGraph Python
llms_txt: https://langchain-ai.github.io/langgraph/llms.txt
The equivalent sample_config.json:
[
{
"name": "LangGraph Python",
"llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt"
}
]
You can also use the server programmatically in Python:
from mcpdoc.main import create_server

# Define one or more documentation sources, each with a name
# and the URL of its llms.txt index.
server = create_server(
    [
        {
            "name": "LangGraph Python",
            "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt",
        },
    ],
    follow_redirects=True,  # follow HTTP redirects when fetching pages
    timeout=15.0,  # HTTP request timeout in seconds
)

# Serve over stdio so an MCP client can spawn this script directly.
server.run(transport="stdio")
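If you save this script as, say, server.py (the filename is illustrative), any of the client configurations shown here can launch it by using python as the command and server.py as the argument in place of the uvx invocation.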
To add this MCP server to Claude Code, run this command in your terminal:
claude mcp add-json "langgraph-docs-mcp" '{"command":"uvx","args":["--from","mcpdoc","mcpdoc","--urls","LangGraph:https://langchain-ai.github.io/langgraph/llms.txt LangChain:https://python.langchain.com/llms.txt","--transport","stdio"]}'
See the official Claude Code MCP documentation for more details.
There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.
If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.
To add a global MCP server, go to Cursor Settings > Tools & Integrations and click "New MCP Server".
Clicking that button opens the ~/.cursor/mcp.json file, where you can add your server like this:
{
"mcpServers": {
"langgraph-docs-mcp": {
"command": "uvx",
"args": [
"--from",
"mcpdoc",
"mcpdoc",
"--urls",
"LangGraph:https://langchain-ai.github.io/langgraph/llms.txt LangChain:https://python.langchain.com/llms.txt",
"--transport",
"stdio"
]
}
}
}
To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then see the tools the added MCP server exposes and call them when it needs to.
You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.
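For example, a prompt such as "Use the langgraph-docs-mcp server to explain how checkpointing works in LangGraph" (the wording is illustrative) encourages the agent to call list_doc_sources and fetch_docs rather than answering from memory.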
To add this MCP server to Claude Desktop:
1. Find your configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
2. Add this to your configuration file:
{
"mcpServers": {
"langgraph-docs-mcp": {
"command": "uvx",
"args": [
"--from",
"mcpdoc",
"mcpdoc",
"--urls",
"LangGraph:https://langchain-ai.github.io/langgraph/llms.txt LangChain:https://python.langchain.com/llms.txt",
"--transport",
"stdio"
]
}
}
}
3. Restart Claude Desktop for the changes to take effect