Global MCP Server

Local LLM prompt routing and context compression

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "apofenic-mcp-prompt-router": {
      "command": "python",
      "args": [
        "-m",
        "mcp.server"
      ],
      "env": {
        "JIRA_URL": "https://yourcompany.atlassian.net",
        "GITHUB_REPO": "your-default-repo",
        "GITHUB_OWNER": "your-username",
        "JIRA_USERNAME": "[email protected]",
        "JIRA_API_TOKEN": "your-token",
        "MCP_SERVER_HOST": "localhost",
        "MCP_SERVER_PORT": "8000",
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_abcdefghijklmnopqrstuvwxyz"
      }
    }
  }
}

Available tools

compress_kv_cache

Compresses large context windows to reduce memory usage while preserving key semantic information.
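The server's actual compression strategy is not documented here. As a rough illustration of the idea, a compressor might keep the head and tail of the context verbatim and filter middle segments by a crude salience score; all names below (`compress_context`, `keep_ratio`, `salience`) are hypothetical, not the server's real API:

```python
# Illustrative sketch only: a naive context compressor. Keeps the first and
# last segments, and retains only the most "salient" middle segments while
# preserving their original order.

def salience(segment: str) -> int:
    """Crude proxy for information density: count of long (>6 char) words."""
    return sum(1 for w in segment.split() if len(w) > 6)

def compress_context(segments: list[str], keep_ratio: float = 0.5) -> list[str]:
    if len(segments) <= 2:
        return segments
    head, *middle, tail = segments
    # Budget: how many middle segments survive compression.
    budget = max(1, int(len(middle) * keep_ratio))
    # Rank middle segments by salience, then restore original ordering.
    ranked = sorted(range(len(middle)), key=lambda i: salience(middle[i]), reverse=True)
    keep = sorted(ranked[:budget])
    return [head] + [middle[i] for i in keep] + [tail]
```

A real KV-cache compressor operates on attention key/value tensors rather than raw text, but the shape of the trade-off is the same: reduce memory while preserving the most informative entries.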

route_prompt

Routes each prompt to the most suitable local LLM based on complexity analysis and heuristics.
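The routing heuristics themselves are internal to the server. A minimal sketch of complexity-based routing might look like the following; the model names, thresholds, and keyword weights are invented for illustration and are not the server's actual registry:

```python
# Hypothetical sketch of complexity-based prompt routing.

def complexity_score(prompt: str) -> float:
    """Crude heuristics: length, presence of code, and reasoning markers."""
    score = len(prompt.split()) / 100.0
    if "```" in prompt or "def " in prompt:
        score += 0.5  # code blocks tend to need a stronger model
    # Reasoning markers ("explain", "step", "why") bump the score.
    score += 0.25 * sum(prompt.lower().count(k) for k in ("explain", "step", "why"))
    return score

def route_prompt(prompt: str) -> str:
    """Pick a local model tier based on the complexity score."""
    score = complexity_score(prompt)
    if score < 0.3:
        return "small-local-model"   # fast, cheap
    if score < 1.0:
        return "medium-local-model"
    return "large-local-model"       # slowest, most capable
```

Routing on cheap lexical signals keeps the dispatcher itself lightweight, so the cost of choosing a model stays negligible next to the cost of running one.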

process_full_pipeline

Runs the complete compression and routing pipeline end-to-end for a given prompt and context.
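To show how the two stages compose, here is a self-contained, heavily simplified sketch of such a pipeline: compress the context, then route the combined prompt. The function names and return shape are assumptions for illustration, not the server's internals:

```python
# Illustrative end-to-end pipeline sketch (hypothetical names throughout).

def compress(segments: list[str], limit: int = 2) -> list[str]:
    # Toy compression: keep only the `limit` longest segments, in original order.
    by_length = sorted(range(len(segments)), key=lambda i: len(segments[i]), reverse=True)
    keep = sorted(by_length[:limit])
    return [segments[i] for i in keep]

def route(prompt: str) -> str:
    # Toy routing: long prompts go to the larger model.
    return "large-local-model" if len(prompt.split()) > 20 else "small-local-model"

def process_full_pipeline(prompt: str, context: list[str]) -> dict:
    compressed = compress(context)
    full_prompt = "\n".join(compressed + [prompt])
    return {"model": route(full_prompt), "prompt": full_prompt}
```

The value of the combined tool is ordering: compressing before routing means the router scores the context the model will actually see, not the raw, uncompressed input.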