Provides local-first document management and AI-assisted search via embeddings, with on-disk persistence and in-memory indexes.
Configuration
{
"mcpServers": {
"andrea9293-mcp-documentation-server": {
"command": "npx",
"args": [
"-y",
"@andrea9293/mcp-documentation-server"
],
"env": {
"MCP_BASE_DIR": "${PATH_TO_WORKSPACE}",
"GEMINI_API_KEY": "YOUR_API_KEY",
"MCP_CACHE_SIZE": "1000",
"MCP_MAX_WORKERS": "4",
"MCP_EMBEDDING_MODEL": "Xenova/all-MiniLM-L6-v2",
"MCP_INDEXING_ENABLED": "true",
"MCP_PARALLEL_ENABLED": "true",
"MCP_STREAMING_ENABLED": "true",
"MCP_STREAM_CHUNK_SIZE": "65536",
"MCP_STREAM_FILE_SIZE_LIMIT": "10485760"
}
}
}
}
Run the MCP Documentation Server to manage your documents locally with fast lookup, AI-assisted search, and embedding-based retrieval. It stores data on your machine, handles large uploads efficiently, and can be extended with Google Gemini for advanced analysis and summaries.
Configure a client to connect to the server using the standard local MCP workflow: install the server tooling in your workspace, then connect a client to add documents, process uploads, and run both semantic and AI-powered searches.
Prerequisites: a modern Node.js environment, which is required to run MCP tools and their dependencies.
1. Clone the project repository to your local machine.
2. Open a terminal in the project directory.
3. Install dependencies.
4. Build the project.
5. Start the MCP server with the command below, either in a terminal or via your MCP client configuration.
# Step 1: Clone the project
git clone https://github.com/andrea9293/mcp-documentation-server.git
cd mcp-documentation-server
# Step 2: Install dependencies
npm install
# Step 3: Build (if a build step exists)
npm run build
# Step 4: Run the MCP server locally via the client-friendly command
npx -y @andrea9293/mcp-documentation-server
Configure behavior through environment variables in your shell or a .env file. Key options include the base data directory, AI-powered search, and performance tuning parameters.
The default data directory is where your documents, chunks, and uploads are stored locally. You can set a custom base directory to isolate workspaces.
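As a sketch, the environment variables from the client configuration above can also be collected in a .env file. The values here mirror the ones shown earlier; the base directory path is a placeholder, and GEMINI_API_KEY is only needed for AI-powered search.

```shell
# .env — example settings for the MCP Documentation Server
MCP_BASE_DIR=/home/user/mcp-data          # placeholder; pick your workspace path
GEMINI_API_KEY=YOUR_API_KEY               # only required for AI-powered search
MCP_EMBEDDING_MODEL=Xenova/all-MiniLM-L6-v2
MCP_CACHE_SIZE=1000
MCP_MAX_WORKERS=4
MCP_INDEXING_ENABLED=true
MCP_PARALLEL_ENABLED=true
MCP_STREAMING_ENABLED=true
MCP_STREAM_CHUNK_SIZE=65536               # 64 KiB per streamed chunk
MCP_STREAM_FILE_SIZE_LIMIT=10485760       # 10 MiB streaming threshold
```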
Add a document with title, content, and metadata to the MCP document store.
List stored documents along with their metadata.
Retrieve a full document by its identifier.
Remove a document, its chunks, and associated original files from storage.
Process files in the uploads folder: convert to documents, chunk, embed, and back up originals.
Return the absolute path to the uploads folder.
List files currently present in the uploads folder.
AI-powered search using Gemini for advanced analysis (requires GEMINI_API_KEY).
Semantic search over document chunks with ranked hits.
Fetch neighboring chunks around a target chunk index to enrich LLM prompts.
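At the protocol level, each of the operations above is invoked as an MCP `tools/call` request over JSON-RPC 2.0. The following is a minimal sketch of how a client builds such a request; the tool name `add_document` and its arguments are illustrative assumptions, since this server's exact tool names are not listed here.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request body as used by MCP clients."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical call adding a document with title, content, and metadata.
request = make_tool_call(1, "add_document", {
    "title": "Release notes",
    "content": "v1.2 adds streaming uploads.",
    "metadata": {"tags": ["notes"]},
})
```

In practice an MCP client SDK handles this framing for you; the sketch only shows the message shape that crosses the stdio transport.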