Provides an MCP Chat CLI server with document retrieval, command prompts, and MCP-based extensibility for interactive AI workflows.
MCP Chat is a command-line interface that lets you interact with a language model, retrieve documents, and extend capabilities through the MCP architecture. It is useful for building conversational workflows that combine expert data access with flexible command-driven prompts.

## Configuration

```json
{
  "mcpServers": {
    "abdullah-1121-mcp-2": {
      "command": "uv",
      "args": [
        "run",
        "uvicorn",
        "mcp_server:mcp_app",
        "--reload"
      ],
      "env": {
        "LLM_MODEL": "gemini-2.0-flash",
        "LLM_API_KEY": "YOUR_GEMINI_API_KEY",
        "LLM_CHAT_COMPLETION_URL": "https://generativelanguage.googleapis.com/v1beta/openai/"
      }
    }
  }
}
```
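A client that consumes a configuration like the one above typically just parses the JSON and launches each server's command with its arguments and environment. The following stdlib-only sketch is illustrative, not how any particular client is implemented; the `server_launch_commands` helper is hypothetical:

```python
import json

# Abbreviated copy of the configuration shown above.
CONFIG = """
{
  "mcpServers": {
    "abdullah-1121-mcp-2": {
      "command": "uv",
      "args": ["run", "uvicorn", "mcp_server:mcp_app", "--reload"],
      "env": {"LLM_MODEL": "gemini-2.0-flash"}
    }
  }
}
"""

def server_launch_commands(config_text: str) -> dict:
    """Map each configured server name to the argv list used to start it."""
    config = json.loads(config_text)
    return {
        name: [spec["command"], *spec.get("args", [])]
        for name, spec in config["mcpServers"].items()
    }
```

A client would then spawn each argv list as a subprocess, merging the `env` block into the child's environment.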
Start the MCP Chat server locally and interact with it through the command line. The server powers document retrieval, prompt-based commands, and extensible MCP features.
Basic interaction: type your message and press Enter to chat with the model. To include a document's contents, prefix its ID with an at symbol. For example, to include the contents of a document named `deposition.md`, enter:

```
> Tell me about @deposition.md
```

Commands: execute server-defined actions by prefixing your input with `/`. For example, to summarize a document, enter:

```
> /summarize deposition.md
```

Commands support auto-completion when you press Tab.
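The `@document` and `/command` prefixes described above can be recognized with a small parser. The sketch below is a hypothetical illustration of that input classification, not the CLI's actual parsing code:

```python
import re

# Hypothetical parser for the two input prefixes described above:
#   @<doc-id>   -> a document to inline into the prompt
#   /<command>  -> a server-defined command with arguments
MENTION_RE = re.compile(r"@([\w.-]+)")

def parse_input(line: str) -> dict:
    """Classify a line of user input and extract mentions or commands."""
    line = line.strip()
    if line.startswith("/"):
        command, *args = line[1:].split()
        return {"kind": "command", "command": command, "args": args}
    return {"kind": "chat", "text": line,
            "mentions": MENTION_RE.findall(line)}
```

For example, `/summarize deposition.md` parses as a `summarize` command with one argument, while `Tell me about @deposition.md` parses as chat text with one document mention.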
Prerequisites: you need Python 3.9 or later and a compatible LLM API key/provider (for example Gemini). Ensure your environment is ready to run Python packages and a local development server.
1) Create and configure your environment variables. Create a `.env` file at the project root and set the following values.

```
LLM_API_KEY=""  # Enter your Gemini API secret key
LLM_CHAT_COMPLETION_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
LLM_MODEL="gemini-2.0-flash"
```

Install the Python toolchain with the uv package manager. Use the following steps to install uv, create a virtual environment, and sync dependencies.
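Projects usually load a `.env` file like the one above with a library such as `python-dotenv`. For reference, here is a minimal stdlib-only sketch of the same idea; the `load_dotenv` helper shown is a simplified stand-in, not this project's actual loader:

```python
import os

def load_dotenv(path: str = ".env") -> dict:
    """Parse simple KEY="value" lines from a .env file into os.environ.

    Naive sketch: skips blank lines and comments, drops trailing
    "# ..." comments, and strips surrounding double quotes.
    """
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.partition("#")[0].strip().strip('"')
    os.environ.update(values)
    return values
```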
```
pip install uv
```

```
uv venv
```

```
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

```
uv sync
```

Launch the MCP server in development mode so changes reload automatically.
```
uv run uvicorn mcp_server:mcp_app --reload
```

Run the project with ChatAgent in the CLI to begin interactive chat sessions.
```
uv run main.py
```

If you want to inspect MCP activity, you can start the inspector tool.
```
npx @modelcontextprotocol/inspector
```

Key capabilities:

- Retrieve document content by ID to include in prompts and queries.
- Execute MCP-defined commands from user input to perform actions like summarization or content extraction.
- Extend capabilities by integrating additional tools and endpoints via the MCP architecture.
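The document-retrieval capability above amounts to a lookup plus a substitution step: each `@<doc-id>` mention in the prompt is replaced by the document's contents before the prompt reaches the model. This stdlib-only sketch illustrates the idea; the in-memory `DOCS` store and `expand_mentions` helper are hypothetical, as the real server would fetch documents through MCP:

```python
import re

# Hypothetical in-memory document store; the actual server would
# resolve document IDs through an MCP resource or tool call instead.
DOCS = {
    "deposition.md": "Transcript of the deposition...",
}

def expand_mentions(prompt: str, docs: dict = DOCS) -> str:
    """Replace each @<doc-id> mention with that document's contents."""
    def repl(match: re.Match) -> str:
        doc_id = match.group(1)
        return docs.get(doc_id, match.group(0))  # leave unknown IDs as-is
    return re.sub(r"@([\w.-]+)", repl, prompt)
```

Leaving unknown IDs untouched keeps the prompt readable when a document lookup fails, rather than silently dropping the mention.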