Universal Model Registry MCP Server
Provides up-to-date API model IDs, pricing, and specs across 46 models and 7 providers for accurate, fast model selection.
Configuration
{
"mcpServers": {
"aezizhu-universal-model-registry": {
"url": "https://universal-model-registry-production.up.railway.app/sse",
"headers": {
"MCP_PRIVATE_ENDPOINT": "<MCP_PRIVATE_ENDPOINT>"
}
}
}
}
You can connect to a dedicated MCP server that provides up-to-date model IDs, pricing, and capabilities for dozens of AI models from multiple providers. This enables your AI agent to select the correct, current model before generating code or making API calls, reducing hallucinations and misconfigurations.
You interact with the MCP server through your MCP client by pointing to the server’s endpoint. Use the available tools to list models, fetch exact IDs, compare options, verify status, and search across models. Start by connecting your client, then perform common tasks such as validating a model’s current status, finding the cheapest option with the needed capabilities, or getting a precise model ID for a coding task.
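One of the common tasks above, finding the cheapest option with the needed capabilities, can be sketched offline. The snippet below is illustrative only: the field names (`input_price`, `capabilities`, `status`) and the sample models are assumptions, not the server's actual schema; in practice the data would come from the registry's tools.

```python
# Hypothetical slice of registry data; field names and values are
# illustrative assumptions, not the server's real schema.
SAMPLE_MODELS = [
    {"model_id": "alpha-mini", "input_price": 0.15, "capabilities": {"vision"}, "status": "current"},
    {"model_id": "alpha-pro", "input_price": 2.50, "capabilities": {"vision", "tools"}, "status": "current"},
    {"model_id": "beta-legacy", "input_price": 0.10, "capabilities": {"tools"}, "status": "deprecated"},
    {"model_id": "gamma-flash", "input_price": 0.30, "capabilities": {"vision", "tools"}, "status": "current"},
]

def cheapest_current_model(models, required):
    """Return the lowest-priced non-deprecated model covering `required` capabilities."""
    candidates = [
        m for m in models
        if m["status"] == "current" and required <= m["capabilities"]
    ]
    return min(candidates, key=lambda m: m["input_price"], default=None)

best = cheapest_current_model(SAMPLE_MODELS, {"vision", "tools"})
print(best["model_id"])  # → gamma-flash
```

Filtering on status first matters: a deprecated model may be cheaper on paper but will eventually stop working, which is exactly the misconfiguration the registry helps avoid.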
Prerequisites: a working MCP client, a modern terminal, and network access. You will also need Docker if you plan to self-host.
Recommended remote MCP connection (easy start): connect your MCP client to the hosted endpoint. Use the following configuration to begin using the server.
{
"mcpServers": {
"model-id-cheatsheet": {
"type": "http",
"url": "https://universal-model-registry-production.up.railway.app/sse",
"args": []
}
}
}
The server exposes a set of endpoints and tools that let your AI agent discover, compare, and validate models. You can also run a local self-hosted instance if you prefer to operate entirely within your environment.
# Self-hosted via Docker (recommended for a quick start)
git clone https://github.com/aezizhu/universal-model-registry.git
cd universal-model-registry
docker build -t model-id-cheatsheet .
docker run -p 8000:8000 model-id-cheatsheet
# SSE endpoint will be available at http://localhost:8000/sse
The server enforces rate limits and connection limits, sanitizes inputs, and runs in a non-root container when deployed. Model data is refreshed on a weekly cadence via an automated updater that checks each provider for new models and deprecations.
Tip: keep your MCP client pointed at the hosted endpoint to receive automatic updates without manual intervention.
Updates to the model registry are driven by tests and a dedicated update workflow. To add or adjust models, follow the project’s contribution steps: update the data source files, run the test suite, and submit a pull request with the changes.
The server provides the following tools:
Fetch the exact API model ID, pricing, context window, and capabilities for a given model_id.
Browse models with optional filters such as provider, status, or capability.
Return a ranked list of models suitable for a given task and budget.
Check whether a model is current, legacy, or deprecated.
Provide a side-by-side comparison of multiple models including pricing, context, and capabilities.
Perform a free-text search across all model fields to find models by keywords.
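The side-by-side comparison described above can be sketched as a simple field-by-field pivot over model specs. The data and field names below are hypothetical stand-ins; the real values come from the registry itself.

```python
# Hypothetical specs for two models; names and numbers are illustrative.
SPECS = {
    "alpha-pro":   {"input_price": 2.50, "context": 200_000, "capabilities": {"vision", "tools"}},
    "gamma-flash": {"input_price": 0.30, "context": 1_000_000, "capabilities": {"vision", "tools"}},
}

def compare_models(specs, model_ids, fields=("input_price", "context")):
    """Build a field-by-field comparison dict for the requested models."""
    return {
        field: {model_id: specs[model_id][field] for model_id in model_ids}
        for field in fields
    }

table = compare_models(SPECS, ["alpha-pro", "gamma-flash"])
print(table["context"])  # context window per model
```

Pivoting by field rather than by model makes it easy to scan one attribute (say, pricing) across all candidates at once, which is how a comparison tool's output is typically consumed.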