This MCP server for Trino provides AI models with structured access to Trino's distributed SQL query engine, allowing them to execute SQL queries against your data sources through a standardized protocol.
The easiest way to get started is using Docker Compose:
# Start the server with docker-compose
docker-compose up -d
This will start three services.
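To confirm everything came up, you can list the running services (service names depend on your compose file):

# List the services started by docker-compose
docker-compose ps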
For non-containerized usage, you can run the standalone API server:
# Run the standalone API server on port 8008
python llm_trino_api.py
You can verify the Docker container API is working by executing a simple query:

# Verify the API is working
curl -X POST "http://localhost:9097/api/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT 1 AS test"}'
For simple direct queries, use the command-line tool:
# Simple direct query
python llm_query_trino.py "SELECT * FROM memory.bullshit.real_bullshit_data LIMIT 5"
# Specify a different catalog or schema
python llm_query_trino.py "SELECT * FROM information_schema.tables" memory information_schema
The Docker container provides an API on port 9097:
# Execute a query against the Docker container API
curl -X POST "http://localhost:9097/api/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT 1 AS test"}'
For more flexible deployments, use the standalone API:
# Start the API server on port 8008
python llm_trino_api.py
This creates endpoints at:
- GET http://localhost:8008/ - API usage info
- POST http://localhost:8008/query - Execute SQL queries

These endpoints can be called from Python:

import requests

def query_trino(sql_query):
    response = requests.post(
        "http://localhost:8008/query",
        json={"query": sql_query}
    )
    return response.json()

# Example query
results = query_trino("SELECT job_title, AVG(salary) FROM memory.bullshit.real_bullshit_data GROUP BY job_title ORDER BY AVG(salary) DESC LIMIT 5")
print(results["formatted_results"])
To generate and load the sample dataset into Trino:

# Generate the bullshit data
python tools/create_bullshit_data.py

# Load the bullshit data into Trino's memory catalog
python load_bullshit_data.py
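To sanity-check the load, you can run a quick row count through the CLI tool shown earlier:

# Confirm the sample data is queryable
python llm_query_trino.py "SELECT COUNT(*) AS row_count FROM memory.bullshit.real_bullshit_data"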
The test script demonstrates end-to-end MCP interaction:
# Run a complex query against the sample data through MCP
python test_bullshit_query.py
You can also interact with the MCP server directly over STDIO:

# Run with STDIO transport inside the container
docker exec -i trino_mcp_trino-mcp_1 python -m trino_mcp.server --transport stdio --debug --trino-host trino --trino-port 8080 --trino-user trino --trino-catalog memory
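As a minimal smoke test, you can pipe a JSON-RPC initialize request into that same command. This sketch assumes the server implements the standard MCP initialize handshake; the protocolVersion string may differ for your server version:

# Send an MCP initialize request over STDIO (assumes the standard MCP JSON-RPC handshake)
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' | \
  docker exec -i trino_mcp_trino-mcp_1 python -m trino_mcp.server --transport stdio --trino-host trino --trino-port 8080 --trino-user trino --trino-catalog memory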
If you encounter 503 errors:
# Rebuild and restart the container
docker-compose stop trino-mcp
docker-compose rm -f trino-mcp
docker-compose up -d trino-mcp
# Check the container logs
docker logs trino_mcp_trino-mcp_1

# Confirm Trino itself is responding
curl -s http://localhost:9095/v1/info | jq
If port 8008 is already in use:
# Run with a custom port
python -c "import llm_trino_api; import uvicorn; uvicorn.run(llm_trino_api.app, host='127.0.0.1', port=8009)"
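To find out what is already bound to the default port (assuming lsof is available on your system):

# Identify the process currently using port 8008
lsof -i :8008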
Both the Docker container API (port 9097) and the standalone API (port 8008) offer:

- GET /api - API documentation and usage examples
- POST /api/query - Execute SQL queries against Trino

Example API request body:
{
  "query": "SELECT * FROM memory.bullshit.real_bullshit_data LIMIT 5",
  "catalog": "memory",
  "schema": "bullshit"
}
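For example, sending that body to the Docker container API with curl:

# Query with an explicit catalog and schema
curl -X POST "http://localhost:9097/api/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT * FROM memory.bullshit.real_bullshit_data LIMIT 5", "catalog": "memory", "schema": "bullshit"}'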
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.
If you only need the server in a single project, you can instead add it to that project by creating (or editing) its .cursor/mcp.json file.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server".
When you click that button, the ~/.cursor/mcp.json file will open and you can add your server like this:
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
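For this project, a plausible global entry would launch the MCP server over STDIO through the running container, reusing the docker exec command shown earlier (the container name and flags come from that example; adjust them to your setup):

{
  "mcpServers": {
    "trino-mcp": {
      "command": "docker",
      "args": [
        "exec", "-i", "trino_mcp_trino-mcp_1",
        "python", "-m", "trino_mcp.server",
        "--transport", "stdio",
        "--trino-host", "trino",
        "--trino-port", "8080",
        "--trino-user", "trino",
        "--trino-catalog", "memory"
      ]
    }
  }
}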
To add an MCP server to a project, create a new .cursor/mcp.json file or add the server to an existing one. The configuration looks exactly the same as the global MCP server example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then be able to see the tools the added MCP server exposes and will call them when needed.
You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.