Enables AI assistants to query CockroachDB clusters via natural language with schema discovery, CRUD, transactions, and health monitoring.
Configuration
```json
{
  "mcpServers": {
    "bpamiri-cockroachdb-mcp": {
      "command": "cockroachdb-mcp",
      "args": [],
      "env": {
        "CRDB_HOST": "your-cluster.cockroachlabs.cloud",
        "CRDB_USER": "your-username",
        "CRDB_CLUSTER": "your-cluster-id",
        "CRDB_DATABASE": "your-database",
        "CRDB_PASSWORD": "your-password",
        "CRDB_READ_ONLY": "true"
      }
    }
  }
}
```

You can run a CockroachDB MCP Server to let AI assistants query and interact with your CockroachDB cluster using natural language. It exposes capabilities such as schema discovery, CRUD operations, cluster health checks, multi-region queries, and safe transaction handling, all driven through an MCP client.
Connect to the CockroachDB MCP Server from your MCP client by starting the local server process and providing the required environment details for your CockroachDB cluster. Once connected, you can ask questions in plain language to explore schemas, read data, modify rows within safety limits, and monitor cluster health. Common workflows include asking for available tables, describing a specific table, querying top results, checking cluster status, and performing CRUD operations within configured safety constraints.
Example usage patterns you can try once the server is running:
- Describe what tables exist in your database
- Inspect the columns of a specific table
- Retrieve a subset of rows from a table with a limit
- Begin a transaction, perform inserts or updates, then commit or roll back
- Check overall cluster health and node status
- Export query results to JSON or CSV for offline analysis
Install the server package:

```shell
pip install cockroachdb-mcp
```

Prepare the environment for your CockroachDB connection by setting the required variables in your shell or a dedicated environment file. You will configure the MCP server to point at your CockroachDB cluster and set the default access mode.
```shell
# Example environment variables (fill with your values)
export CRDB_HOST=your-cluster.cockroachlabs.cloud
export CRDB_USER=your-username
export CRDB_PASSWORD=your-password
export CRDB_DATABASE=your-database
export CRDB_CLUSTER=your-cluster-id
export CRDB_READ_ONLY=true
```

Start the MCP server locally. The server is launched as a standard Python process that the MCP client connects to using the provided configuration. If you prefer a dedicated HTTP interface or streaming support for clients like Claude, start the appropriate server mode as described in deployment options.
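For a sense of how these variables map onto an actual connection, the hypothetical helper below assembles a PostgreSQL-wire DSN the way CockroachDB Cloud clients typically do (port 26257, `sslmode=verify-full`, and the `options=--cluster%3D…` routing parameter for serverless clusters). The server reads the variables itself and may build its connection differently; this is illustration only.

```python
import os
from urllib.parse import quote

def crdb_dsn(env=os.environ):
    """Build a CockroachDB connection URL from CRDB_* variables.

    Hypothetical helper for illustration, not part of cockroachdb-mcp.
    """
    user = quote(env["CRDB_USER"], safe="")
    password = quote(env["CRDB_PASSWORD"], safe="")
    host = env["CRDB_HOST"]
    database = env["CRDB_DATABASE"]
    dsn = f"postgresql://{user}:{password}@{host}:26257/{database}?sslmode=verify-full"
    cluster = env.get("CRDB_CLUSTER")
    if cluster:
        # CockroachDB Cloud serverless routes by cluster id via `options`
        dsn += f"&options=--cluster%3D{cluster}"
    return dsn
```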
Configuration and safety controls help prevent unintended changes. You can enable read-only mode to block write operations, cap the number of rows returned per query, and define a blocklist of disallowed commands. The server supports transaction semantics, so you can begin a transaction, perform reads and writes, and either commit or roll back as needed.
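To make these controls concrete, here is a minimal sketch of how such a pre-execution check might work. The specific blocklist, write-verb set, and default row limit are assumptions for illustration, not the server's actual values or API.

```python
import re

# Illustrative defaults; the real server's blocklist and limits are configurable.
BLOCKED_COMMANDS = {"DROP", "TRUNCATE", "GRANT", "REVOKE"}
WRITE_COMMANDS = {"INSERT", "UPDATE", "DELETE", "UPSERT"}

def check_query(sql, read_only=True, max_rows=100):
    """Return (allowed, reason_or_rewritten_sql) for a single SQL statement."""
    verb = sql.strip().split(None, 1)[0].upper()
    if verb in BLOCKED_COMMANDS:
        return False, f"blocked command: {verb}"
    if read_only and verb in WRITE_COMMANDS:
        return False, "read-only mode: writes are disabled"
    if verb == "SELECT" and not re.search(r"\bLIMIT\b", sql, re.IGNORECASE):
        # Enforce the per-query row cap by appending a LIMIT clause.
        sql = f"{sql.rstrip().rstrip(';')} LIMIT {max_rows}"
    return True, sql
```

A read-only session would reject `DELETE FROM t`, while an unbounded `SELECT` gets a `LIMIT` appended before execution.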
Deployment options include running locally or exposing an HTTP/SSE or streamable HTTP endpoint for clients that require real-time streaming responses. You can also integrate OAuth for SSO with Claude.ai when deploying as a Custom Connector.
Security considerations include ensuring credentials are not logged, enforcing SSL/TLS encryption, and using robust access controls for your cluster. Configure environment variables and runtime options to align with your security requirements.
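One common way to keep credentials out of logs, shown here purely as a sketch using Python's standard `logging.Filter` mechanism (the server's own logging setup is not documented here), is to mask known secret values before records are emitted:

```python
import logging
import re

class RedactSecrets(logging.Filter):
    """Illustrative logging filter that masks secret values in log messages."""

    def __init__(self, secrets):
        super().__init__()
        # Build one pattern matching any of the secret strings literally.
        self.pattern = re.compile("|".join(re.escape(s) for s in secrets if s))

    def filter(self, record):
        record.msg = self.pattern.sub("***", str(record.msg))
        return True
```

Attaching this filter to the logger that handles connection messages means a password passed through an environment variable never appears verbatim in log output.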
The example configuration snippet above demonstrates the runtime command and environment settings an MCP client needs. Use it as a template to connect Claude Desktop or other MCP clients.
The server exposes tools for the following operations:
- Establish a connection to the CockroachDB cluster from the MCP client.
- Close an active MCP connection to the cluster.
- Retrieve current health and status information for the CockroachDB cluster.
- List all databases available in the connected cluster.
- List tables and views in the selected database.
- Get column information for a specific table.
- Run SELECT queries and return results.
- Check query safety prior to execution.
- Read rows by key or filter.
- Insert a new row into a table.
- Update existing rows in a table.
- Delete rows by key.
- Start a new transaction.
- Commit the current transaction.
- Rollback the current transaction.
- Export query results to JSON.
- Export query results to CSV.
- Persist learned information about the cluster.
- Retrieve all saved knowledge.
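The JSON and CSV export tools in the list above take query results and serialize them for offline analysis. As a rough sketch of that behavior, assuming results arrive as a list of dicts (the server's actual export tools may differ in naming and options):

```python
import csv
import io
import json

def export_rows(rows, fmt="json"):
    """Serialize query results (a list of dicts) to JSON or CSV text.

    Illustrative only; not the server's actual export implementation.
    """
    if fmt == "json":
        return json.dumps(rows, indent=2, default=str)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys() if rows else [])
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")
```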