mcp-rag MCP Server
Provides a retrieval-augmented generation server integrating GroundX and OpenAI for document context with MCP.
Configuration
{
"mcpServers": {
"apatoliya-mcp-rag": {
"command": "mcp",
"args": [
"dev",
"server.py"
],
"env": {
"BUCKET_ID": "YOUR_BUCKET_ID",
"OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
"GROUNDX_API_KEY": "YOUR_GROUNDX_API_KEY"
}
}
}
}
This MCP server provides a retrieval-augmented generation workflow that combines GroundX document retrieval with OpenAI models, orchestrated through the Model Context Protocol (MCP). You can ingest PDFs, perform semantic searches, and customize how results are generated and ranked to suit context-heavy applications.
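Since the server reads its credentials from the environment, it can help to fail fast when one of the three required variables from the configuration above is missing. A minimal sketch, assuming nothing about the server's internals (the `check_env` helper is illustrative, not part of this project):

```python
import os

# The three variables the configuration above expects.
REQUIRED_VARS = ("GROUNDX_API_KEY", "OPENAI_API_KEY", "BUCKET_ID")

def check_env(env=None):
    """Return the names of required variables that are missing or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = check_env()
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```

Running a check like this before `mcp dev server.py` gives a clearer error than a failed API call later on.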
You will run the MCP server locally and connect to it with an MCP client. Ingest documents, then ask questions or run queries to retrieve relevant content and generate contextual responses. Use the standard search and document ingestion tools to build your knowledge base and tailor the completion behavior with a small, typed configuration.
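The retrieve-then-generate loop described above can be sketched as follows. This is a hedged illustration, not the server's actual code: `search_fn` stands in for a GroundX search call, `complete_fn` for an OpenAI completion, and names like `Hit` and `top_k` are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hit:
    text: str     # retrieved passage
    score: float  # relevance score from the retriever

def answer_with_context(query: str,
                        search_fn: Callable[[str], List[Hit]],
                        complete_fn: Callable[[str], str],
                        top_k: int = 3) -> str:
    """Retrieve the top-k passages for a query, then generate a grounded answer."""
    hits = sorted(search_fn(query), key=lambda h: h.score, reverse=True)[:top_k]
    context = "\n\n".join(h.text for h in hits)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return complete_fn(prompt)
```

With real clients you would back `search_fn` with a GroundX search and `complete_fn` with an OpenAI chat completion; for testing, both can be stubbed with plain functions.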
Key workflows include ingesting documents into your knowledge base, running semantic searches over them, and tuning how completions are generated and ranked.
You need git, Python, and uv on your machine before installing and running the server.
# 1) Clone the repository
git clone <repository-url>
cd mcp-rag
# 2) Create and activate a virtual environment
uv sync
source .venv/bin/activate  # On Windows, use `.venv\Scripts\activate`
Create and configure your environment file to supply the credentials and IDs required by the MCP server.
# 1) Copy the example environment file
cp .env.example .env
# 2) Populate your environment variables in .env
GROUNDX_API_KEY="your-groundx-api-key"
OPENAI_API_KEY="your-openai-api-key"
BUCKET_ID="your-bucket-id"
The server exposes the following tools:
Ingests a document (for example, a PDF) into the MCP server and processes it for later retrieval.
Performs a search against ingested documents and returns a structured response with query, score, and result.
Configures how a search should be performed, including model selection and bucket use.
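The response and configuration shapes described above might look like the following sketch. Only `query`, `score`, and `result` come from the description; the configuration fields (`completion_model`, `bucket_id` and its default) are assumptions for illustration, not the server's documented schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchResponse:
    query: str    # the query that was executed
    score: float  # relevance score of the retrieved content
    result: str   # retrieved or generated content

@dataclass
class SearchConfig:
    completion_model: str = "gpt-4o"  # assumed default model name
    bucket_id: Optional[str] = None   # None -> fall back to the BUCKET_ID env var

# Example usage:
cfg = SearchConfig(bucket_id="42")
resp = SearchResponse(query="what is in the report?", score=0.87, result="...")
```

A small, typed configuration like this keeps the completion behavior adjustable per query while defaulting to the environment-level bucket.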