MCP Reasoning Engine Server
Provides a REST API and CLI to perform domain-aware reasoning using Claude with MCP tools for search, validation, and rubric evaluation.
Configuration
{
  "mcpServers": {
    "arslanmanzoorr-mcp": {
      "url": "http://localhost:8000",
      "headers": {
        "MCP_HOST": "0.0.0.0",
        "MCP_PORT": "8000",
        "ANTHROPIC_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

The server combines Claude-based reasoning with structured MCP tools to perform domain-specific reasoning across legal, health, and science data. It exposes an HTTP API for easy integration and can be deployed in Docker for production use. You use it to search knowledge, validate outputs against a schema, and evaluate results with domain rubrics, all within a secure, API-driven workflow.
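A client that consumes the configuration above only needs the `url` and `headers` fields. A minimal sketch of pulling those out (the helper function is illustrative, not part of the server):

```python
import json

# The same mcpServers configuration shown above.
CONFIG = """
{
  "mcpServers": {
    "arslanmanzoorr-mcp": {
      "url": "http://localhost:8000",
      "headers": {
        "MCP_HOST": "0.0.0.0",
        "MCP_PORT": "8000",
        "ANTHROPIC_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
"""

def server_settings(config_text: str, name: str) -> tuple[str, dict]:
    """Extract the URL and headers for one configured MCP server."""
    entry = json.loads(config_text)["mcpServers"][name]
    return entry["url"], entry["headers"]

url, headers = server_settings(CONFIG, "arslanmanzoorr-mcp")
```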
You interact with the MCP server through the HTTP API or by running a local CLI/SDK client. Start by hosting the HTTP API server, or use the Python client directly in your application to send questions and receive structured, validated results.
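Sending a question over the HTTP API can be sketched with the standard library. The `/reason` endpoint path and the request field names here are assumptions; check the API documentation at the server root for the actual contract:

```python
import json
import os
import urllib.request

# Base URL of the running MCP HTTP API server.
API_URL = os.environ.get("MCP_API_URL", "http://localhost:8000")

def build_payload(question: str, domain: str) -> bytes:
    """Serialize a question for the API. Field names are assumptions."""
    return json.dumps({"question": question, "domain": domain}).encode("utf-8")

def ask(question: str, domain: str) -> dict:
    """POST the question to an assumed /reason endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        f"{API_URL}/reason",
        data=build_payload(question, domain),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the server running, `ask("...", "legal")` would return the structured, validated result described above.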
Prerequisites: Python 3.8 or newer and an Anthropic Claude API key.
1) Create a Python virtual environment and activate it on your platform.
python -m venv .venv
# Windows
.venv\Scripts\activate
# Linux/Mac
source .venv/bin/activate

2) Install the required Python packages.
pip install -r requirements.txt

3) Set your Anthropic API key and optional MCP server settings.
# Windows PowerShell
$env:ANTHROPIC_API_KEY = "YOUR_API_KEY"
# Linux/Mac
export ANTHROPIC_API_KEY="YOUR_API_KEY"
# Optional: port and host for the MCP HTTP API
export MCP_PORT=8000
export MCP_HOST=0.0.0.0

4) Run the HTTP API server to expose the MCP endpoints.
python mcp_api_server.py

The server supports three core MCP tools: knowledge search, schema validation, and rubric evaluation. It provides an HTTP API for integration and can run in a Docker container for production deployments. The API documentation is available at the server root once it is running.
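The environment variables from step 3 map onto server startup roughly like this (a sketch of the pattern; the actual `mcp_api_server.py` may read its settings differently):

```python
import os

# Defaults mirror the documented MCP_HOST / MCP_PORT values.
MCP_HOST = os.environ.get("MCP_HOST", "0.0.0.0")
MCP_PORT = int(os.environ.get("MCP_PORT", "8000"))
API_KEY = os.environ.get("ANTHROPIC_API_KEY", "")

if not API_KEY:
    # Without a key the server can start, but Claude-backed reasoning will fail.
    print("Warning: ANTHROPIC_API_KEY is not set")

print(f"Serving MCP HTTP API on {MCP_HOST}:{MCP_PORT}")
```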
- Knowledge search: searches RAG documents for relevant domain-specific information and returns results with sources, titles, and content.
- Schema validation: validates the reasoning output against the universal reasoning schema and reports status and errors.
- Rubric evaluation: evaluates the reasoning output against domain-specific rubrics, returning scores, pass/fail status, and human review flags.
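To illustrate the shape of the rubric evaluation tool's output, here is a self-contained sketch; the rubric structure, weights, and thresholds are assumptions, not the server's actual schema:

```python
def evaluate_rubric(output: dict, rubric: dict, pass_threshold: float = 0.7) -> dict:
    """Score an output against weighted criteria and flag borderline results."""
    total_weight = sum(c["weight"] for c in rubric["criteria"])
    score = sum(
        c["weight"] * (1.0 if c["check"](output) else 0.0)
        for c in rubric["criteria"]
    ) / total_weight
    return {
        "score": round(score, 2),
        "passed": score >= pass_threshold,
        # Scores near the threshold are flagged for human review.
        "needs_human_review": pass_threshold - 0.2 <= score < pass_threshold + 0.1,
    }

# Hypothetical rubric: answers must cite sources and be non-empty.
rubric = {
    "criteria": [
        {"weight": 2.0, "check": lambda o: bool(o.get("sources"))},
        {"weight": 1.0, "check": lambda o: len(o.get("answer", "")) > 0},
    ]
}
result = evaluate_rubric({"answer": "42", "sources": ["doc1"]}, rubric)
```

This mirrors the tool's documented outputs: a score, a pass/fail status, and a human-review flag.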