Local LLM prompt routing and context compression
Configuration
```json
{
  "mcpServers": {
    "apofenic-mcp-prompt-router": {
      "command": "python",
      "args": [
        "-m",
        "mcp.server"
      ],
      "env": {
        "JIRA_URL": "https://yourcompany.atlassian.net",
        "GITHUB_REPO": "your-default-repo",
        "GITHUB_OWNER": "your-username",
        "JIRA_USERNAME": "[email protected]",
        "JIRA_API_TOKEN": "your-token",
        "MCP_SERVER_HOST": "localhost",
        "MCP_SERVER_PORT": "8000",
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_abcdefghijklmnopqrstuvwxyz"
      }
    }
  }
}
```

Tools

Compresses large context windows to reduce memory usage while preserving key semantic information.
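The server's compression algorithm is not documented here, but the idea can be sketched as extractive compression: score each context sentence by its overlap with the prompt and keep the highest-scoring sentences within a budget. The function name, scoring rule, and word-based budget below are illustrative assumptions, not the server's actual implementation.

```python
# Hypothetical sketch of extractive context compression: keep the context
# sentences most relevant to the prompt, within a word budget. The real
# server's semantic-preservation strategy may differ.

def compress_context(prompt: str, context: str, budget: int = 100) -> str:
    """Return a compressed context biased toward prompt-relevant sentences."""
    prompt_terms = set(prompt.lower().split())
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    # Score each sentence by vocabulary overlap with the prompt.
    scored = sorted(
        sentences,
        key=lambda s: len(prompt_terms & set(s.lower().split())),
        reverse=True,
    )
    kept, used = [], 0
    for s in scored:
        words = len(s.split())
        if used + words > budget:
            continue
        kept.append(s)
        used += words
    # Restore original ordering so the compressed context stays coherent.
    kept.sort(key=sentences.index)
    return ". ".join(kept) + ("." if kept else "")
```

A real implementation would likely use token counts and embedding similarity rather than word overlap, but the budget-then-reorder shape is the same.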
Intelligently routes prompts to the most appropriate local LLM based on complexity analysis and heuristics.
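The routing heuristics themselves are not specified, but a complexity-based router can be sketched as a scoring function over simple prompt signals. The model names, thresholds, and signal weights below are illustrative assumptions only.

```python
# Hypothetical complexity heuristic for routing a prompt to a local model.
# Model names and thresholds are placeholders, not the server's actual rules.

def route_prompt(prompt: str) -> str:
    """Return the name of the local model best suited to the prompt."""
    words = prompt.split()
    # Heuristic signals: length, code markers, and reasoning keywords.
    has_code = "```" in prompt or "def " in prompt
    needs_reasoning = any(
        kw in prompt.lower() for kw in ("why", "explain", "prove", "compare")
    )
    score = len(words) / 50 + 2 * has_code + 1.5 * needs_reasoning
    if score < 1:
        return "llama-3.2-3b"   # cheap model for simple prompts
    if score < 3:
        return "llama-3.1-8b"   # mid-tier for moderate complexity
    return "llama-3.1-70b"      # large model for hard prompts
```

Keeping the heuristic cheap matters: the router runs on every prompt, so its cost must stay negligible next to the model call it is saving.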
Runs the complete compression and routing pipeline end-to-end for a given prompt and context.
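The end-to-end pipeline can be sketched as three steps: compress the context, choose a model, and assemble the routed request. Everything below is a self-contained illustration with deliberately naive stand-ins (truncation for compression, length for complexity); the server's actual tool names and internals are not documented here.

```python
# Hypothetical end-to-end pipeline: compress, route, then build the request
# the server would dispatch to the chosen local model.

def run_pipeline(prompt: str, context: str, budget: int = 200) -> dict:
    # Step 1: naive compression -- truncate context to a word budget.
    words = context.split()
    compressed = " ".join(words[:budget])
    # Step 2: route by prompt length as a stand-in complexity heuristic.
    model = "small-local-model" if len(prompt.split()) < 40 else "large-local-model"
    # Step 3: return the routed request.
    return {"model": model, "prompt": prompt, "context": compressed}
```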