LLM Context MCP Server
Provides MCP integration to share and access project context with LLMs using llm-context tools.
Configuration
{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": [
        "--from",
        "llm-context",
        "lc-mcp"
      ]
    }
  }
}

You run an MCP server that lets your AI agents access and share relevant project context on demand. This server integrates llm-context so conversations can dynamically retrieve files and excerpts, enabling seamless, context-rich AI collaboration with minimal manual file handling.
To use the llm-context MCP server with your MCP client, configure the server entry so the AI can run llm-context as a local process and communicate through standard input/output. The integration is designed to let the AI request additional context or files during conversations without you manually providing them each time.
A typical MCP configuration for llm-context uses a stdio connection. The client runs the uvx command to launch the lc-mcp entry point, which exposes a controlled interface through which the AI can request context. In your client setup, the server entry is named llm-context and uses the uvx launcher with arguments that specify the package source and the MCP entry point.
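As an illustration, if your MCP client is Claude Desktop (an assumption; any stdio-capable MCP client follows the same pattern), the entry goes inside the mcpServers object of its claude_desktop_config.json file, alongside any servers you already have configured. The some-other-server entry below is purely a placeholder:

{
  "mcpServers": {
    "some-other-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    },
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}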
Prerequisites: ensure your environment can start MCP servers and that your MCP client supports stdio connections. A common setup uses the uvx launcher, which ships with the uv toolchain, to start the llm-context MCP server.
1) Install the llm-context package in your environment according to your platform requirements. You will typically need a minimum version that includes the MCP integration used here; a version-pinned variant of the launch arguments is shown after the configuration snippet below.
2) Create or navigate to your project workspace and initialize your MCP configuration if you have not already done so.
3) Add the MCP server configuration snippet to enable llm-context communication from your MCP client.
{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}

After configuring the MCP server, you can start conversations with your AI and request additional context as needed. The llm-context integration is designed to minimize friction by automatically selecting relevant files and formatting context for quick paste-and-use in chats.
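To pin the minimum version mentioned in step 1, uvx accepts a version specifier in the --from argument. The version number below is a placeholder; substitute the release documented by llm-context as including MCP support:

{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context>=0.3.0", "lc-mcp"]
    }
  }
}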
If you are coordinating multiple tasks, you can reuse the same MCP workflow to switch between llm-context contexts or rules, enabling focused context extraction for different project areas.
Typical operations in this workflow:
- Initialize project configuration for llm-context MCP integration.
- Select files based on the current rule to determine what context to expose to the AI.
- Generate and copy the formatted context for use in conversations.
- Switch between rule configurations to control what context is included.
- Handle file and context requests when additional files are needed during a chat.
- Optionally send the context as a separate prompt when using prompt-based workflows.