Agentic MCP Server

Agentic RAG with MCP Server

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "ashishpatel26-agentic-rag-with-mcp-server": {
      "command": "python",
      "args": [
        "server.py"
      ],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here",
        "OPENAI_MODEL_NAME": "your-model-name-here"
      }
    }
  }
}

This MCP server exposes tools for Retrieval-Augmented Generation (RAG) workflows: entity extraction, query refinement, time retrieval, and relevance checking. Combined with an MCP client, these tools let you build Agentic RAG applications that reason over documents more effectively.

How to use

Start by launching the MCP server so your client can discover and call its tools. You then run a client session, list available tools, and call the tools with your own queries and data. The server coordinates with your preferred language models (OpenAI or Gemini) to extract entities, refine queries, and verify content relevance, enabling more accurate and context-aware retrieval.

How to install

Prerequisites: Python 3.9 or higher, an OpenAI API key if you use OpenAI models (or a Gemini API key if you use Gemini), and network access.

# Step 1: Clone the repository
git clone https://github.com/ashishpatel26/Agentic-RAG-with-MCP-Server.git

# Step 2: Navigate into the project directory
cd Agentic-RAG-with-MCP-Server

# Step 3: Install Python dependencies
pip install -r requirements.txt

Additional notes

Configuration relies on environment variables defined in a .env file. You will set your OpenAI model name and Gemini API key in this file to enable the corresponding tools.

# Example .env entries
OPENAI_MODEL_NAME="your-model-name-here"
GEMINI_API_KEY="your-api-key-here"

Available tools

get_time_with_prefix

Returns the current date and time, useful for timestamping results.
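The actual implementation lives in server.py; a minimal sketch of such a tool, using only the standard library, could look like this:

```python
from datetime import datetime

def get_time_with_prefix() -> str:
    """Return the current date and time with a readable prefix."""
    # Prefix the ISO timestamp so results can be stamped unambiguously.
    return "Current date and time: " + datetime.now().isoformat()

print(get_time_with_prefix())
```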

extract_entities_tool

Uses OpenAI to extract entities from a query to improve document retrieval relevance.
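The real tool sends the query to an OpenAI model; the sketch below shows a plausible shape for the prompt and the parsing of a comma-separated reply, with the model call left out. The prompt wording and reply format are illustrative assumptions, not the server's actual ones.

```python
def build_entity_prompt(query: str) -> str:
    # Illustrative prompt; the server's own wording may differ.
    return (
        "Extract the named entities (people, places, organizations, dates) "
        f"from the following query as a comma-separated list:\n\n{query}"
    )

def parse_entities(model_reply: str) -> list[str]:
    # Split the comma-separated reply and drop empty fragments.
    return [e.strip() for e in model_reply.split(",") if e.strip()]

entities = parse_entities("Ada Lovelace, London, 1843")
```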

refine_query_tool

Improves the quality of user queries with OpenAI-powered refinement.
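Conceptually, refinement is a single round trip to the model. In this hypothetical sketch, the model is injected as a plain callable so the flow can be shown without an API key; the real tool wires this to OpenAI.

```python
from typing import Callable

def refine_query(query: str, llm: Callable[[str], str]) -> str:
    # llm is any callable that sends a prompt to a model and returns text.
    prompt = f"Rewrite this search query to be more specific and unambiguous: {query}"
    return llm(prompt).strip()

# Stubbed model for illustration:
refined = refine_query("rag tips", lambda p: "best practices for retrieval-augmented generation")
```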

check_relevance

Filters out irrelevant content by checking chunk relevance with an LLM.
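A common pattern for this kind of check is to ask the model for a strict YES/NO verdict per chunk and keep only the chunks judged relevant. The sketch below assumes that pattern (the server's actual prompt and parsing may differ) and again stubs the model as a callable.

```python
from typing import Callable

def check_relevance(chunk: str, query: str, llm: Callable[[str], str]) -> bool:
    prompt = (
        "Does the following chunk help answer the query?\n"
        f"Query: {query}\nChunk: {chunk}\n"
        "Answer strictly YES or NO."
    )
    # Treat any reply starting with "yes" (case-insensitive) as relevant.
    return llm(prompt).strip().lower().startswith("yes")

keep = check_relevance("MCP defines tool discovery.", "What is MCP?", lambda p: "YES")
```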