Provides tools to query data quality, analyze lineage, and track AI agent data access within the Bigeye platform.
Configuration
{
"mcpServers": {
"bigeyedata-bigeye-mcp-server": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"BIGEYE_API_KEY=your_api_key_here",
"-e",
"BIGEYE_API_URL=https://your-instance.bigeye.com",
"-e",
"BIGEYE_WORKSPACE_ID=your_workspace_id_here",
"-e",
"BIGEYE_DEBUG=false",
"bigeye-mcp-server:latest"
],
"env": {
"BIGEYE_DEBUG": "false",
"BIGEYE_API_KEY": "your_api_key_here",
"BIGEYE_API_URL": "https://your-instance.bigeye.com",
"BIGEYE_WORKSPACE_ID": "your_workspace_id_here"
}
}
}
}
The Bigeye MCP Server lets you interact with the Bigeye Data Observability platform through a configurable, ephemeral MCP service that Claude Desktop starts on demand. The server exposes a set of tools for data quality, lineage, AI agent data access, and incident management, so you can programmatically query, analyze, and act on data quality and lineage information.
Start Claude Desktop and ensure the Bigeye MCP Server is configured in your Claude Desktop setup. The server runs as an ephemeral container that starts automatically when you begin using the Bigeye tools and stops when you are done; a fresh instance is used for each session. Use the available tools to fetch data quality issues, analyze table quality, trace data lineage, track agent data access, and manage incidents.
Prerequisites: Docker installed on your machine and Claude Desktop configured to launch MCP servers. Alternatively, you can run the server directly with a local Python development setup instead of Docker.
# Quick start: build a local Docker-based MCP server (example)
# 1) Build the Docker image locally
# Note: replace the repository URL below with your actual source location
git clone https://github.com/your-org/bigeye-mcp-server.git
cd bigeye-mcp-server
docker build -t bigeye-mcp-server:latest .
# 2) Configure Claude Desktop to use the Docker-based MCP server (see Configuration section)
# Claude Desktop will spin up the container as needed

Credentials are required to run the MCP server. Store them securely in your Claude Desktop configuration and do not paste API keys into chat interfaces. The server relies on three main credentials: an API key, an API URL, and a workspace ID, plus a debug flag you can enable for troubleshooting.
Environment variables you will provide in your configuration include BIGEYE_API_KEY, BIGEYE_API_URL, BIGEYE_WORKSPACE_ID, and BIGEYE_DEBUG. These are passed to the MCP server container to authorize and connect to your Bigeye instance.
If you run into missing credentials or authentication issues, verify that the environment variables match exactly in your Claude Desktop config and that the values are correctly formatted without trailing slashes for the API URL.
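As a quick sanity check before launching the server, a short script can verify the variables described above. This is a minimal sketch, not part of the Bigeye tooling; the validation rules (required variables, no trailing slash on the API URL) come from the guidance in this section:

```python
import os

# Variable names from the Configuration section above.
REQUIRED = ("BIGEYE_API_KEY", "BIGEYE_API_URL", "BIGEYE_WORKSPACE_ID")

def validate_config(env: dict) -> list[str]:
    """Return a list of human-readable problems with the Bigeye MCP env vars."""
    problems = []
    for name in REQUIRED:
        if not env.get(name):
            problems.append(f"{name} is missing or empty")
    url = env.get("BIGEYE_API_URL", "")
    if url.endswith("/"):
        problems.append("BIGEYE_API_URL must not have a trailing slash")
    return problems

if __name__ == "__main__":
    issues = validate_config(dict(os.environ))
    print("OK" if not issues else "\n".join(issues))
```

Running this before editing your Claude Desktop config catches the two most common misconfigurations (a missing variable and a trailing slash) without starting the container.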
Configure the MCP server as an ephemeral Docker container that Claude Desktop starts on demand. Your credential values are supplied as environment variables in the container launch command.
Security best practices: never paste API keys into chat interfaces, rotate keys regularly, and store credentials in secure configuration files with restricted permissions.
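One way to apply the "restricted permissions" advice on macOS or Linux is to make the config file owner-only. A minimal sketch (the config path shown is Claude Desktop's default location on macOS and is an assumption; adjust it for your system):

```python
import stat
from pathlib import Path

def restrict_permissions(path: Path) -> None:
    """Make a credentials file readable and writable by the owner only (0600)."""
    path.chmod(stat.S_IRUSR | stat.S_IWUSR)

if __name__ == "__main__":
    # Assumed default Claude Desktop config location on macOS.
    cfg = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
    if cfg.exists():
        restrict_permissions(cfg)
```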
If environment variables are missing, the server prints detailed setup instructions when it starts. Ensure your Claude Desktop config file includes BIGEYE_API_KEY, BIGEYE_API_URL, BIGEYE_WORKSPACE_ID, and, if needed, BIGEYE_DEBUG.
If authentication fails, confirm that your API key is valid, the workspace ID is correct, and the Bigeye URL is accurate (no trailing slash). For connectivity issues, check network access and firewall settings, and enable debug mode for more detail.
The server supports a broad set of tools for data quality, lineage, and agent data access management. It is designed to run as a disposable container, ensuring clean sessions and isolation between uses.
Issue and incident management:
Fetch data quality issues with filtering by status, schema names, and pagination
Get issues for a specific table
Perform a comprehensive quality analysis for a table, including metrics and issues
Update issue status, priority, or add comments
Merge multiple issues into a single incident
Unmerge issues from incidents
Get AI-powered resolution suggestions

Lineage analysis:
Retrieve the lineage graph for a data entity (upstream/downstream/bidirectional)
Get details for a specific lineage node
Get all issues affecting a lineage node
Trace upstream to identify root causes of data issues
Analyze the downstream impact of data issues
Complete lineage trace from root cause to impact

Agent data access tracking:
Track which tables/columns an AI agent accesses
Commit tracked access to Bigeye's lineage graph
View current tracking status
Clear tracking without committing
Clean up old agent lineage edges
Delete a custom lineage node
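Under the hood, Claude Desktop invokes each of the tools above over MCP's JSON-RPC 2.0 interface on the container's stdin/stdout. A minimal sketch of what such a request looks like (the tool name `get_issues` and its arguments are hypothetical; the actual tool names are defined by the server):

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

if __name__ == "__main__":
    # Hypothetical tool name and filters, for illustration only.
    print(build_tool_call(1, "get_issues", {"status": "NEW", "schema": "analytics"}))
```

You never send these messages yourself when using Claude Desktop; the sketch only shows why the container runs with `-i` (interactive stdin) in the configuration above.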