
MCP 2 MCP Server

Provides an MCP Chat CLI server with document retrieval, command prompts, and MCP-based extensibility for interactive AI workflows.

Installation
Add the following to your MCP client configuration file.

Configuration

```json
{
  "mcpServers": {
    "abdullah-1121-mcp-2": {
      "command": "uv",
      "args": [
        "run",
        "uvicorn",
        "mcp_server:mcp_app",
        "--reload"
      ],
      "env": {
        "LLM_MODEL": "gemini-2.0-flash",
        "LLM_API_KEY": "YOUR_GEMINI_API_KEY",
        "LLM_CHAT_COMPLETION_URL": "https://generativelanguage.googleapis.com/v1beta/openai/"
      }
    }
  }
}
```

MCP Chat is a command-line interface that lets you interact with a language model, retrieve documents, and extend capabilities through the MCP architecture. It is useful for building conversational workflows that combine expert data access with flexible command-driven prompts.

How to use

Start the MCP Chat server locally and interact with it through the command line. The server powers document retrieval, prompt-based commands, and extensible MCP features.

Basic interaction: type your message and press Enter to chat with the model. To include content from a document, prefix the document ID with an at symbol. For example, to include the contents of a document named deposition.md, enter: > Tell me about @deposition.md.

Commands: execute server-defined actions by prefixing your input with /. For example, to summarize a document, enter: > /summarize deposition.md. Commands support auto-completion when you press Tab.
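As a rough illustration, the two input prefixes described above (`@` for document mentions, `/` for commands) can be told apart with a few lines of Python. The function name and return shape here are hypothetical, not MCP Chat's actual parser:

```python
# Hypothetical input classifier; illustrates the @/command conventions
# described above, not the CLI's real implementation.

def classify_input(line: str) -> tuple[str, str]:
    """Return (kind, payload) where kind is 'command', 'mention', or 'chat'."""
    line = line.strip()
    if line.startswith("/"):
        # "/summarize deposition.md" -> a server-defined command
        return ("command", line[1:])
    if "@" in line:
        # "Tell me about @deposition.md" -> chat turn with a document mention
        return ("mention", line)
    return ("chat", line)

print(classify_input("/summarize deposition.md"))  # ('command', 'summarize deposition.md')
print(classify_input("Tell me about @deposition.md"))
```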

How to install

Prerequisites: you need Python 3.9 or later and a compatible LLM API key/provider (for example Gemini). Ensure your environment is ready to run Python packages and a local development server.

1) Create and configure your environment variables. Create a .env file at the project root and set the following values.

```
LLM_API_KEY=""  # Enter your Gemini API secret key
LLM_CHAT_COMPLETION_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
LLM_MODEL="gemini-2.0-flash"
```
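The lines above follow the usual `KEY="value"` .env format. As a minimal stdlib sketch of how such a file is parsed (real projects typically use a library like python-dotenv; `load_env` is illustrative only):

```python
# Minimal .env parser sketch (stdlib only); shows the KEY="value"
# format expected above. Not the project's actual loader.

def load_env(text: str) -> dict[str, str]:
    env = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or "=" not in line:
            continue  # skip blank lines and malformed entries
        key, value = line.split("=", 1)
        env[key.strip()] = value.strip().strip('"')
    return env

sample = '''
LLM_API_KEY=""  # Enter your Gemini API secret key
LLM_MODEL="gemini-2.0-flash"
'''
print(load_env(sample))
```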

2) Install dependencies

Install the uv package manager, create a virtual environment, and sync the project's dependencies.

```
pip install uv
```
```
uv venv
```
```
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```
```
uv sync
```

3) Start the MCP server

Launch the MCP server in development mode so changes reload automatically.

```
uv run uvicorn mcp_server:mcp_app --reload
```

4) Run the project CLI

Run the project with ChatAgent in the CLI to begin interactive chat sessions.

```
uv run main.py
```

5) Optional inspector

If you want to inspect MCP activity, you can start the inspector tool.

```
npx @modelcontextprotocol/inspector
```

Available tools

document_retrieval

Retrieve document content by ID to include in prompts and queries.
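A hedged sketch of what a retrieval tool like this could look like, with an in-memory dict standing in for the server's actual document store (`DOCS`, the signature, and the error behavior are all assumptions):

```python
# Hypothetical document_retrieval sketch; the server's real storage
# and signature are not documented here, so a dict stands in.
DOCS = {
    "deposition.md": "Deposition transcript contents...",
}

def document_retrieval(doc_id: str) -> str:
    """Return document content by ID, or raise if the ID is unknown."""
    try:
        return DOCS[doc_id]
    except KeyError:
        raise ValueError(f"Unknown document ID: {doc_id}")
```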

command_execution

Execute MCP-defined commands from user input to perform actions like summarization or content extraction.
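One common way to implement this kind of command handling is a dispatch table mapping command names to handlers. The sketch below is illustrative; the names and handlers are not the server's actual implementations:

```python
# Hypothetical command dispatch sketch; handlers are placeholders.
def summarize(doc_id: str) -> str:
    return f"Summary of {doc_id}"

COMMANDS = {"summarize": summarize}

def command_execution(user_input: str) -> str:
    """Run a '/command arg' string against the command table."""
    name, _, arg = user_input.lstrip("/").partition(" ")
    handler = COMMANDS.get(name)
    if handler is None:
        raise ValueError(f"Unknown command: /{name}")
    return handler(arg)
```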

mcp_integration

Extend capabilities by integrating additional tools and endpoints via the MCP architecture.