Keboola MCP Server enables AI agents and tools to connect directly to your Keboola project, giving them access to data, transformations, SQL queries, and job triggers without additional integration code. It bridges your Keboola resources with modern AI assistants like Claude, Cursor, CrewAI, and more.
Before getting started, you'll need:
The `uv` package installer

macOS/Linux:
```shell
# Install using Homebrew
brew install uv
```

Windows:

```shell
# Using the installer script
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or using pip
pip install uv

# Or using winget
winget install --id=astral-sh.uv -e
```
For additional installation options, visit the official uv documentation.
Before setting up the MCP server, gather these three essential pieces of information:
KBC_STORAGE_TOKEN - Your authentication token for Keboola
KBC_WORKSPACE_SCHEMA - Identifies your workspace in Keboola (required for SQL queries)
Keboola Region URL - Depends on your deployment region:
| Region | API URL |
|---|---|
| AWS North America | https://connection.keboola.com |
| AWS Europe | https://connection.eu-central-1.keboola.com |
| Google Cloud EU | https://connection.europe-west3.gcp.keboola.com |
| Google Cloud US | https://connection.us-east4.gcp.keboola.com |
| Azure EU | https://connection.north-europe.azure.keboola.com |
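Before wiring up the MCP server, it can help to sanity-check your storage token against your region's API. A minimal Python sketch, assuming Keboola's public Storage API token-verification endpoint (`GET /v2/storage/tokens/verify` with the `X-StorageApi-Token` header); the short region keys below are illustrative labels for this example, not official identifiers:

```python
# Region hosts mirror the table above; the dict keys are just labels.
KEBOOLA_REGION_HOSTS = {
    "aws-us": "https://connection.keboola.com",
    "aws-eu": "https://connection.eu-central-1.keboola.com",
    "gcp-eu": "https://connection.europe-west3.gcp.keboola.com",
    "gcp-us": "https://connection.us-east4.gcp.keboola.com",
    "azure-eu": "https://connection.north-europe.azure.keboola.com",
}

def verify_token_request(region: str, storage_token: str):
    """Return the (url, headers) pair for checking that a token is valid."""
    base = KEBOOLA_REGION_HOSTS[region]
    return f"{base}/v2/storage/tokens/verify", {"X-StorageApi-Token": storage_token}

url, headers = verify_token_request("aws-eu", "my-token")
# Send with any HTTP client (urllib.request, requests, curl);
# a 200 response means the token is valid on that stack.
```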
If your Keboola project uses BigQuery, you will also need a Google service account credentials JSON file. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path of this file.

You can run the Keboola MCP Server in four different ways:
Claude or Cursor automatically starts the MCP server for you - no manual commands needed.
```json
{
  "mcpServers": {
    "keboola": {
      "command": "uvx",
      "args": [
        "keboola_mcp_server",
        "--api-url", "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your_keboola_storage_token",
        "KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
      }
    }
  }
}
```
For BigQuery users, add this line to the "env" section:
```json
"GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json"
```
Config file locations:

- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
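Because the config entry is plain JSON, you can also generate it rather than hand-editing. A hedged Python sketch; the helper name and its parameters are invented for illustration, only the emitted JSON structure comes from the example above:

```python
import json

def keboola_mcp_entry(api_url, storage_token, workspace_schema,
                      bigquery_credentials=None):
    """Build the "mcpServers" entry shown above as a Python dict."""
    env = {
        "KBC_STORAGE_TOKEN": storage_token,
        "KBC_WORKSPACE_SCHEMA": workspace_schema,
    }
    if bigquery_credentials:  # BigQuery projects need the extra env var
        env["GOOGLE_APPLICATION_CREDENTIALS"] = bigquery_credentials
    return {
        "mcpServers": {
            "keboola": {
                "command": "uvx",
                "args": ["keboola_mcp_server", "--api-url", api_url],
                "env": env,
            }
        }
    }

print(json.dumps(keboola_mcp_entry(
    "https://connection.keboola.com", "your_token", "your_schema"), indent=2))
```

Paste the printed JSON into the config file for your client.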
```json
{
  "mcpServers": {
    "keboola": {
      "command": "uvx",
      "args": [
        "keboola_mcp_server",
        "--api-url", "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your_keboola_storage_token",
        "KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
      }
    }
  }
}
```
For BigQuery users, add this line to the "env" section:
```json
"GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json"
```
When using Windows Subsystem for Linux with Cursor AI:
```json
{
  "mcpServers": {
    "keboola": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "source /wsl_path/to/keboola-mcp-server/.env && /wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --transport stdio"
      ]
    }
  }
}
```
Create a .env file at /wsl_path/to/keboola-mcp-server/.env containing:
```shell
export KBC_STORAGE_TOKEN="your_keboola_storage_token"
export KBC_WORKSPACE_SCHEMA="your_workspace_schema"
```
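In the WSL setup above, bash's `source` is what actually reads this file. For reference, here is a small Python sketch of how such a `.env` file can be parsed; the parser itself is illustrative and not part of the server:

```python
def parse_env_file(text: str) -> dict:
    """Parse simple KEY=value lines, tolerating `export ` prefixes,
    blank lines, comments, and double-quoted values."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("export "):
            line = line[len("export "):]
        key, sep, value = line.partition("=")
        if sep:  # skip lines with no '='
            env[key.strip()] = value.strip().strip('"')
    return env

sample = '''
export KBC_STORAGE_TOKEN="your_keboola_storage_token"
export KBC_WORKSPACE_SCHEMA="your_workspace_schema"
'''
print(parse_env_file(sample))
```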
For developing with the MCP server code:
{
"mcpServers": {
"keboola": {
"command": "/absolute/path/to/.venv/bin/python",
"args": [
"-m", "keboola_mcp_server.cli",
"--transport", "stdio",
"--api-url", "https://connection.YOUR_REGION.keboola.com"
],
"env": {
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
}
}
}
}
For BigQuery users, add the GOOGLE_APPLICATION_CREDENTIALS line to the "env" section.
Run the server in a terminal for testing:
```shell
# Set environment variables
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema

# For BigQuery users
# export GOOGLE_APPLICATION_CREDENTIALS=/full/path/to/credentials.json

# Run with uvx (no installation needed)
uvx keboola_mcp_server --api-url https://connection.YOUR_REGION.keboola.com

# OR, if developing locally
python -m keboola_mcp_server.cli --api-url https://connection.YOUR_REGION.keboola.com
```
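The two commands differ only in how the entry point is invoked. If you are scripting the launch, a small Python sketch that builds either command line (both helper names are invented for illustration):

```python
import os

def build_server_command(api_url: str, local_dev: bool = False) -> list[str]:
    """Return the argv for launching the MCP server, matching the
    uvx and local-development variants shown above."""
    if local_dev:
        return ["python", "-m", "keboola_mcp_server.cli", "--api-url", api_url]
    return ["uvx", "keboola_mcp_server", "--api-url", api_url]

def server_env(storage_token: str, workspace_schema: str) -> dict:
    """Merge the two required variables into the current environment."""
    return {**os.environ,
            "KBC_STORAGE_TOKEN": storage_token,
            "KBC_WORKSPACE_SCHEMA": workspace_schema}

# e.g. subprocess.Popen(build_server_command(url), env=server_env(tok, ws))
```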
Alternatively, pull and run the server with Docker:

```shell
docker pull keboola/mcp-server:latest

# For Snowflake users
docker run -it \
  -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
  -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
  keboola/mcp-server:latest \
  --api-url https://connection.YOUR_REGION.keboola.com

# For BigQuery users (add credentials volume mount)
# docker run -it \
#   -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
#   -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
#   -e GOOGLE_APPLICATION_CREDENTIALS="/creds/credentials.json" \
#   -v /local/path/to/credentials.json:/creds/credentials.json \
#   keboola/mcp-server:latest \
#   --api-url https://connection.YOUR_REGION.keboola.com
```
Once configured, you can query your Keboola data through your MCP client.

Start with a simple verification query:

"What buckets and tables are in my Keboola project?"

From there, try prompts for data exploration, data analysis, and data pipelines.
The MCP server provides access to these capabilities, organized by category:

Storage Tools:

- `retrieve_buckets` - Lists all storage buckets in your project
- `get_bucket_detail` - Gets detailed information about a specific bucket
- `retrieve_bucket_tables` - Lists all tables within a specific bucket
- `get_table_detail` - Provides detailed information for a specific table
- `update_bucket_description` - Updates a bucket's description
- `update_column_description` - Updates a column description
- `update_table_description` - Updates a table's description

SQL Tools:

- `query_table` - Executes custom SQL queries against your data
- `get_sql_dialect` - Identifies whether your workspace uses Snowflake or BigQuery SQL

Component Tools:

- `create_component_root_configuration` - Creates a component configuration
- `create_component_row_configuration` - Creates a component configuration row
- `create_sql_transformation` - Creates an SQL transformation with custom queries
- `find_component_id` - Finds component IDs matching a query
- `get_component` - Gets information about a specific component
- `get_component_configuration` - Gets information about a specific configuration
- `get_component_configuration_examples` - Retrieves sample configurations
- `retrieve_component_configurations` - Lists component configurations
- `retrieve_transformations` - Lists transformation configurations
- `update_component_root_configuration` - Updates a component configuration
- `update_component_row_configuration` - Updates a configuration row
- `update_sql_transformation_configuration` - Updates an SQL transformation

Job Tools:

- `retrieve_jobs` - Lists and filters jobs
- `get_job_detail` - Gets detailed information about a specific job
- `start_job` - Triggers a component or transformation job

Documentation Tools:

- `docs_query` - Searches Keboola documentation based on natural language queries

| MCP Client | Support Status | Connection Method |
|---|---|---|
| Claude (Desktop & Web) | ✅ Supported, tested | stdio |
| Cursor | ✅ Supported, tested | stdio |
| Windsurf, Zed, Replit | ✅ Supported | stdio |
| Codeium, Sourcegraph | ✅ Supported | HTTP+SSE |
| Custom MCP Clients | ✅ Supported | HTTP+SSE or stdio |
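Under the hood, a custom client invokes every tool in the list above with an MCP `tools/call` JSON-RPC 2.0 message sent over stdio or HTTP+SSE. A sketch of the message shape; the `sql_query` argument name is an assumption for illustration, so check the tool's actual input schema via `tools/list`:

```python
import json

def tools_call_request(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` request as JSON-RPC 2.0."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical call; the exact argument key comes from the tool's schema.
msg = tools_call_request(1, "query_table", {"sql_query": "SELECT 1"})
print(msg)
```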
| Issue | Solution |
|---|---|
| Authentication errors | Verify that KBC_STORAGE_TOKEN is valid |
| Workspace issues | Confirm that KBC_WORKSPACE_SCHEMA is correct |
| Connection timeouts | Check network connectivity |
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can instead add it to that project by creating or editing its .cursor/mcp.json file.
To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".
When you click that button, the ~/.cursor/mcp.json file will open and you can add your server like this:
```json
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
```
To add an MCP server to a project, create a new .cursor/mcp.json file or add the server to the existing one. The format is exactly the same as the global example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then be able to see the available tools the added MCP server has available and will call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.