
llm-context MCP Server

Share code with LLMs via Model Context Protocol or clipboard. Rule-based customization enables easy switching between different tasks (like code review and documentation). Includes smart code outlining.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "cyberchitta-llm-context.py": {
      "command": "uvx",
      "args": [
        "--from",
        "llm-context",
        "lc-mcp"
      ]
    }
  }
}

You can run this MCP server to enable AI agents and chat interfaces to access focused, task-specific project context through the llm-context workflow. It streamlines sharing relevant files during development conversations, helps manage context to stay within token limits, and lets agents fetch additional files on demand without manual file copying.

How to use

To use this MCP server, connect your MCP-enabled client to the stdio server exposed by the llm-context tool. AI agents can then request focused context, validate rules, and fetch missing files on demand during conversations. The workflow works from either side: run local commands yourself to prepare context, or let your chat environment use the MCP connection for seamless file access.

How to install

Prerequisites: llm-context is a Python tool, so you need Python and the uv package manager (the configuration below launches the server with uvx). You also need a client capable of MCP integration (for example, Claude Desktop with MCP support). Follow these steps to install and start the MCP server.

Install the llm-context tool via your package manager.

Configure your MCP client to launch the local stdio server using the command and arguments shown below, then verify the server is available to the client.
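As a sketch, assuming you use uv (the client configuration below launches the server with `uvx`), installation might look like the following; the exact package specifier and the `/path/to/your/project` placeholder are illustrative, so check the project's own docs:

```shell
# Install the llm-context CLI as a uv tool (package name as published on PyPI)
uv tool install llm-context

# Inside the project you want to share, initialize llm-context's configuration
cd /path/to/your/project
lc-init
```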

Configuration and MCP setup

The MCP integration provides a local server that exposes llm-context functionality through a standard input/output channel. Use the following configuration in your MCP client to connect to the server.

{
  "mcpServers": {
    "llm_context_mcp": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}
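Under the hood, the stdio transport speaks JSON-RPC 2.0: the client writes newline-delimited JSON requests to the server's stdin and reads responses from its stdout. A minimal sketch of the `initialize` request a client sends first (the field values here are illustrative; the exact capability set depends on your client):

```python
import json

# JSON-RPC 2.0 "initialize" request -- the first message an MCP client
# sends over the server's stdin. Values are illustrative examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Each message is serialized as a single line of JSON on the stream.
wire = json.dumps(request)
print(wire)
```

Your MCP client handles this handshake for you; the sketch only shows what travels over the stdio channel that the configuration above sets up.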

Agent and human workflows with MCP

MCP enables AI agents to access additional files on demand, and humans can validate rules and share context directly through the MCP channel. Use the provided commands to explore the codebase, validate tasks, and generate focused context for sub-agents.

Examples of common MCP actions

Examples shown in the workflow include exploring the codebase with outlines, validating rules, and fetching missing files as needed. These actions help you build precise, task-focused contexts that fit within token budgets while preserving essential project details.
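As a sketch, a typical human-side session using the commands from the tool list below might look like this (the rule name `code` is an illustrative placeholder; use a rule defined in your project):

```shell
lc-set-rule code   # switch to the rule that shapes the context
lc-select          # select files matching the active rule
lc-context -p      # generate the context with prompt instructions included
```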

Available tools

lc-init

Initialize project configuration for llm-context tooling and MCP integration.

lc-select

Select files based on the active rule to shape the forthcoming context.

lc-context

Generate and copy a formatted context snippet for use in chats or MCP flows.

lc-context -p

Include prompt instructions in the generated context.

lc-context -m

Format the context as a separate message for multi-turn interactions.

lc-context -nt

Generate context without tool-usage instructions, for clients that lack MCP tool support.

lc-set-rule

Switch the active rule used to shape the context.

lc-preview

Validate the selected rule's applicability and estimated size before use.

lc-outlines

Produce code structure excerpts to help agents understand project layout.

lc-missing

Fetch specific files or implementations on demand through MCP.