Databricks MCP Server
Provides MCP-based access to Databricks clusters, jobs, notebooks, and DBFS file operations via an asyncio-enabled MCP server.
Configuration
{
  "mcpServers": {
    "databricks_mcp": {
      "command": "./start_mcp_server.sh",
      "args": [],
      "env": {
        "DATABRICKS_HOST": "YOUR_DATABRICKS_HOST",
        "DATABRICKS_TOKEN": "YOUR_DATABRICKS_TOKEN"
      }
    }
  }
}
You are deploying a Model Context Protocol (MCP) server that provides programmatic access to Databricks clusters, jobs, notebooks, and more. This server lets your LLM-powered tools interact with Databricks resources through the MCP interface, enabling automation and richer tooling inside your workflows.
Use an MCP client to connect to the Databricks MCP Server and invoke tools to manage clusters, jobs, notebooks, and files. The server operates asynchronously for efficient interactions, so your client can issue multiple requests without blocking. You can perform tasks such as listing clusters, creating or terminating clusters, starting stopped clusters, listing and running jobs, listing notebooks, exporting notebooks, listing files in DBFS, and executing SQL statements. Treat each tool as a function you can call from your MCP-enabled client and handle responses within your application logic.
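As a concrete starting point, here is a minimal client sketch using the official MCP Python SDK (the mcp package). It launches the server over stdio via the repository's start_mcp_server.sh script, lists the available tools, and calls one of them; the tool name "list_clusters" is an assumption based on the tool descriptions later on this page, so adjust it to match what list_tools actually reports.
import asyncio
import os
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server over stdio. Pass the Databricks credentials through
    # explicitly, mirroring the env block in the JSON configuration above.
    server = StdioServerParameters(
        command="./start_mcp_server.sh",
        args=[],
        env={
            "DATABRICKS_HOST": os.environ["DATABRICKS_HOST"],
            "DATABRICKS_TOKEN": os.environ["DATABRICKS_TOKEN"],
        },
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the server actually exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Call one of them; "list_clusters" is an assumed tool name.
            result = await session.call_tool("list_clusters", arguments={})
            print(result.content)

asyncio.run(main())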
Before installing the MCP server, you need Python 3.10 or higher and the uv package manager, which is recommended for MCP servers.
Install the uv tool if you do not have it yet. The commands differ by platform:
# MacOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
irm https://astral.sh/uv/install.ps1 | iex
Clone the repository and navigate into the project directory, then prepare a Python virtual environment and install dependencies in development mode.
git clone https://github.com/JustTryAI/databricks-mcp-server.git
cd databricks-mcp-server
# Create and activate virtual environment
uv venv
# On Windows
.venv\Scripts\activate
# On Linux/Mac
source .venv/bin/activate
# Install dependencies in development mode
uv pip install -e .
# Install development dependencies
uv pip install -e ".[dev]"Configure your Databricks connection details so the MCP server can talk to your Databricks workspace.
# Windows
set DATABRICKS_HOST=https://your-databricks-instance.azuredatabricks.net
set DATABRICKS_TOKEN=your-personal-access-token
# Linux/Mac
export DATABRICKS_HOST=https://your-databricks-instance.azuredatabricks.net
export DATABRICKS_TOKEN=your-personal-access-token
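Before launching the server, you can sanity-check the credentials by calling the Databricks REST API directly. The sketch below is not part of the project; it simply hits the standard /api/2.0/clusters/list endpoint using the variables you just set.
import json
import os
import urllib.request

# Query the Databricks clusters API with the host and token from the
# environment; an HTTP 403 here means the token or host is wrong.
req = urllib.request.Request(
    os.environ["DATABRICKS_HOST"] + "/api/2.0/clusters/list",
    headers={"Authorization": "Bearer " + os.environ["DATABRICKS_TOKEN"]},
)
with urllib.request.urlopen(req) as resp:
    clusters = json.load(resp).get("clusters", [])
print(f"Credentials OK - {len(clusters)} cluster(s) visible")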
You can start the MCP server using the provided startup scripts. This will launch the server and listen for MCP protocol connections.
# Windows
.\start_mcp_server.ps1
# Linux/Mac
./start_mcp_server.sh
If you want quick visibility into Databricks resources while developing your integration, you can run helper scripts to list clusters or notebooks.
uv run scripts/show_clusters.py
uv run scripts/show_notebooks.py
Available Tools
The server exposes the following tools to MCP clients; a call sketch follows the list.
Return a list of all Databricks clusters with their IDs, names, and statuses.
Create a new Databricks cluster with the specified configuration and start it.
Terminate a running Databricks cluster by ID or name.
Fetch detailed information about a specific Databricks cluster.
Start a terminated Databricks cluster to make it active again.
List all Databricks jobs available in the workspace.
Trigger execution of a Databricks job and monitor its progress.
List notebooks within a given workspace directory.
Export a notebook from the workspace for offline access or versioning.
List files and directories under a specified DBFS path.
Execute a SQL statement against Databricks SQL endpoints or clusters.
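Inside the ClientSession from the earlier sketch, each of these tools is invoked with call_tool. The fragment below is illustrative only: the tool name "execute_sql" and its "statement" parameter are assumptions based on the descriptions above, so confirm the exact names against the server's list_tools output.
# Illustrative only: tool and argument names ("execute_sql", "statement")
# are assumptions; verify them via session.list_tools() first.
result = await session.call_tool(
    "execute_sql",
    arguments={"statement": "SELECT 1"},
)
for item in result.content:
    print(item)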