Ragdocs MCP Server
Provides semantic search over documentation sources via Qdrant and embeddings.
Configuration
```json
{
  "mcpServers": {
    "qpd-v-mcp-ragdocs": {
      "command": "node",
      "args": [
        "C:/Users/YOUR_USERNAME/AppData/Roaming/npm/node_modules/@qpd-v/mcp-server-ragdocs/build/index.js"
      ],
      "env": {
        "OLLAMA_URL": "http://localhost:11434",
        "QDRANT_URL": "http://127.0.0.1:6333",
        "OPENAI_API_KEY": "your-openai-api-key",
        "QDRANT_API_KEY": "your-qdrant-api-key",
        "EMBEDDING_PROVIDER": "ollama"
      }
    }
  }
}
```

Ragdocs MCP Server lets you add documentation from URLs or local files and query it with natural language. It stores content in a vector database for fast semantic search and retrieval, making it easy to explore documentation sources.
You can interact with Ragdocs through any MCP-compatible client. Use the following practical patterns to get started:
- Add documentation sources by providing their URLs or local file paths. Ragdocs will ingest the content and make it searchable.
- Search with natural language to find relevant sections, summaries, or explanations across all stored sources.
- List all connected documentation sources to see what has been indexed and what remains to be added.
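Under the hood, an MCP client issues these actions as `tools/call` requests over stdio. The sketch below shows one such request; the exact tool and argument names (`add_documentation`, `url`) are assumptions for illustration, so check the tool list your client reports for the names this server actually exposes:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add_documentation",
    "arguments": { "url": "https://example.com/docs/getting-started" }
  }
}
```

In practice your client builds these requests for you; you simply phrase the request in natural language.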
```shell
# 1) Install the Ragdocs MCP server globally
npm install -g @qpd-v/mcp-server-ragdocs

# 2) Start Qdrant (Docker) locally if you plan to run it yourself
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant

# 3) Ensure Ollama is running with the embedding model
ollama pull nomic-embed-text

# 4) Add the Ragdocs MCP server to your MCP client configuration
```

If you are configuring Ragdocs in a client such as Cline, Roo-Code, or Claude Desktop, you typically add a stdio MCP entry that executes the built server script. The command and environment variables shown in the configuration snippet reflect the official setup examples.
The configuration and run steps are shown in the sample snippets on this page. The runtime command starts the server directly via Node.js, pointing to the built index.js. When adapting paths, replace YOUR_USERNAME with your actual username.
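The global module path in the sample configuration is Windows-specific. On macOS or Linux, the global npm directory differs (you can find yours with `npm root -g`); a sketch of an equivalent entry, assuming a typical `/usr/local/lib/node_modules` prefix:

```json
{
  "mcpServers": {
    "qpd-v-mcp-ragdocs": {
      "command": "node",
      "args": [
        "/usr/local/lib/node_modules/@qpd-v/mcp-server-ragdocs/build/index.js"
      ],
      "env": {
        "QDRANT_URL": "http://127.0.0.1:6333",
        "EMBEDDING_PROVIDER": "ollama",
        "OLLAMA_URL": "http://localhost:11434"
      }
    }
  }
}
```

The prefix shown is an assumption; substitute the path that `npm root -g` prints on your machine.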
- Use a local Qdrant instance (via Docker) or Qdrant Cloud, and point Ragdocs to its URL with QDRANT_URL. For local setups, this is typically http://127.0.0.1:6333.
- Choose an embedding provider. Ollama is the default and free, while OpenAI can be used with an API key. When using OpenAI, supply OPENAI_API_KEY.
- Ragdocs can be configured in client environments such as Cline, Roo-Code, or Claude Desktop. The stdio configuration runs the Node.js server from the installed npm module path.
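For example, switching from Ollama to OpenAI embeddings only changes the `env` block of the configuration; the other entries stay as shown earlier (key values are placeholders):

```json
"env": {
  "EMBEDDING_PROVIDER": "openai",
  "OPENAI_API_KEY": "your-openai-api-key",
  "QDRANT_URL": "http://127.0.0.1:6333"
}
```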
Common issues include Qdrant connection failures and missing embedding models. If you encounter a Qdrant connection error, verify that Docker is running and that the Qdrant container is active. For missing embeddings, ensure you have run the model pull command for Ollama (nomic-embed-text) or configure OpenAI as your embedding provider with a valid API key.
If you adjust configuration paths, replace the placeholder username with your actual one and make sure the referenced files exist. For problems with global npm installs, confirm that npm is on your PATH and try running the install with administrator privileges.
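A quick way to narrow down connection problems is to probe each service's HTTP endpoint before digging into the server itself. A minimal sketch in Python, assuming the default local ports for Qdrant and Ollama:

```python
import urllib.request
import urllib.error

def service_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP service answers at `url` with any status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        # The server answered, even if with an error status code.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: service is down.
        return False

if __name__ == "__main__":
    # Default local endpoints for Qdrant and Ollama.
    for name, url in [("Qdrant", "http://127.0.0.1:6333"),
                      ("Ollama", "http://localhost:11434")]:
        status = "reachable" if service_reachable(url) else "NOT reachable"
        print(f"{name} at {url}: {status}")
```

If Qdrant is not reachable, restart its Docker container; if Ollama is not reachable, start the Ollama service before retrying.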
- Ragdocs uses a stdio MCP server configuration that launches via node and the built script. The example runs the module from the global npm directory.
- The following environment variables control core behavior:
  - QDRANT_URL: URL of your Qdrant instance
  - QDRANT_API_KEY: API key for Qdrant (typically needed only for Qdrant Cloud)
  - EMBEDDING_PROVIDER: 'ollama' or 'openai'
  - OLLAMA_URL: URL of your Ollama instance (default http://localhost:11434)
  - OPENAI_API_KEY: API key for OpenAI embeddings (required if using OpenAI)
Choose between a local Qdrant setup or Qdrant Cloud, and select an embedding provider. Ollama is recommended for local, cost-free embeddings, while OpenAI can be used with a paid API key.
Add documentation from a URL to the RAG database
Search through stored documentation using a natural language query
List all documentation sources currently stored