llm-context MCP Server
Share code with LLMs via Model Context Protocol or clipboard. Rule-based customization enables easy switching between different tasks (like code review and documentation). Includes smart code outlining.
Configuration
{
  "mcpServers": {
    "cyberchitta-llm-context.py": {
      "command": "uvx",
      "args": [
        "--from",
        "llm-context",
        "lc-mcp"
      ]
    }
  }
}

You can run this MCP server to enable AI agents and chat interfaces to access focused, task-specific project context through the llm-context workflow. It streamlines sharing relevant files during development conversations, helps keep context within token limits, and lets agents fetch additional files on demand without manual file copying.
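As a quick sanity check, the configuration above can be parsed and validated with a few lines of Python before wiring it into a client; the keys checked here are exactly the ones that appear in the snippet above.

```python
import json

# The server entry from the configuration above.
CONFIG = """
{
  "mcpServers": {
    "cyberchitta-llm-context.py": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}
"""

def validate_mcp_config(raw: str) -> list[str]:
    """Return the names of server entries that define a command and args."""
    cfg = json.loads(raw)
    valid = []
    for name, entry in cfg.get("mcpServers", {}).items():
        if isinstance(entry.get("command"), str) and isinstance(entry.get("args"), list):
            valid.append(name)
    return valid

print(validate_mcp_config(CONFIG))  # → ['cyberchitta-llm-context.py']
```

This catches the most common copy-paste mistakes (a missing `args` list or a misplaced brace) before the client silently fails to launch the server.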
To use this MCP server, you will connect your MCP-enabled client to the stdio server exposed by the llm-context tool. This lets AI agents request focused context, validate rules, and fetch missing files on demand during conversations. You can drive the workflow from either a human or an agent perspective: run local commands to prepare context, or integrate the MCP path into your chat environment to enable seamless file access.
Prerequisites: you need uv installed, since the configuration runs the tool with uvx (which ships as part of uv). You also need a client capable of MCP integration (for example, Claude Desktop with MCP support). Follow these steps to install and start the MCP server.
Install the llm-context tool, or rely on uvx to fetch and run it on demand as in the configuration below.
Configure your MCP client to launch the local stdio server using the command and arguments shown below.
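Before configuring the client, you can confirm that uvx is actually on your PATH. This small check is a convenience sketch using only the Python standard library:

```python
import shutil

def has_uvx() -> bool:
    """True if the uvx launcher is available on PATH."""
    return shutil.which("uvx") is not None

if not has_uvx():
    print("uvx not found; install uv first")
```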
The MCP integration provides a local server that exposes llm-context functionality through a standard input/output channel. Use the following configuration in your MCP client to connect to the server.
{
  "mcpServers": {
    "llm_context_mcp": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}

MCP enables AI agents to access additional files on demand, and humans can validate rules and share context directly through the MCP channel. Use the provided commands to explore the codebase, validate tasks, and generate focused context for sub-agents.
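Under the hood, an MCP client speaks JSON-RPC 2.0 with the server over its stdin/stdout. The sketch below builds the kind of initialize request a client sends first; the protocolVersion and clientInfo values are illustrative assumptions, not something this particular server requires verbatim.

```python
import json

def initialize_request(request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 initialize message for an MCP stdio server."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Illustrative values; real clients negotiate these at startup.
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return json.dumps(msg)

print(initialize_request())
```

In practice your MCP client handles this handshake for you; the sketch only shows what travels over the stdio channel.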
Examples shown in the workflow include exploring the codebase with outlines, validating rules, and fetching missing files as needed. These actions help you build precise, task-focused contexts that fit within token budgets while preserving essential project details.
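To reason about token budgets before sharing context, a rough characters-per-token heuristic is often enough; the 4-characters-per-token ratio below is a common rule of thumb for English-heavy code, not a property of llm-context itself.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using a characters-per-token heuristic."""
    return max(1, round(len(text) / chars_per_token))

def fits_budget(files: dict[str, str], budget: int) -> bool:
    """True if the combined estimated tokens of all files stay under budget."""
    total = sum(estimate_tokens(content) for content in files.values())
    return total <= budget

files = {"main.py": "print('hello')\n" * 100}
print(fits_budget(files, budget=8000))  # → True
```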
Initialize project configuration for llm-context tooling and MCP integration.
Select files based on the active rule to shape the forthcoming context.
Generate and copy a formatted context snippet for use in chats or MCP flows.
Include prompt instructions in the generated context.
Format the context as a separate message for multi-turn interactions.
Generate context without invoking tools, suitable for non-tool-assisted workflows.
Switch the active rule used to shape the context.
Validate the selected rule's applicability and estimated size before use.
Produce code structure excerpts to help agents understand project layout.
Fetch specific files or implementations on demand through MCP.
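On-demand file fetching over MCP is a tool invocation, framed as a JSON-RPC tools/call request. In the sketch below, the tool name "lc-get-files" and its arguments are hypothetical placeholders; the actual tool names this server exposes are discoverable at runtime via the tools/list method.

```python
import json

def tool_call_request(tool: str, arguments: dict, request_id: int = 2) -> str:
    """Build a JSON-RPC 2.0 tools/call message for an MCP server."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

# "lc-get-files" and its arguments are hypothetical placeholders;
# query the server's tools/list method for the real tool names.
print(tool_call_request("lc-get-files", {"paths": ["src/main.py"]}))
```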