
MCP KQL Server

An MCP server for Kusto and Log Analytics that lets you execute KQL (Kusto Query Language) queries within an AI prompt, then analyze and visualize the results.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "4r9un-mcp-kql-server": {
      "command": "python",
      "args": [
        "-m",
        "mcp_kql_server"
      ],
      "env": {
        "PYTHONWARNINGS": "ignore"
      }
    }
  }
}

You can run the MCP KQL Server to translate natural language questions into optimized KQL queries, discover and cache schema context, and execute queries against Azure Data Explorer with AI-assisted insights. This enables you to ask in plain English and receive accurate, context-aware KQL results and rich visualizations with minimal setup.

How to use

You interact with the MCP KQL Server through an MCP client. Start the server, then use the two main tools it exposes: kql_execute for running KQL queries with AI context, and kql_schema_memory for discovering and caching cluster schemas. Begin by authenticating to Azure, then describe the task in natural language in your MCP client. The server validates the query, loads or discovers the necessary schema context, executes the KQL, and returns results along with context and optional visualizations. If a query fails, you receive an enhanced error message and, where applicable, AI-driven suggestions to help you refine it.

Practical usage patterns include: asking for data explorations with AI-generated context, requesting schema discovery for a new cluster, and performing time-based analyses with visual outputs. The flow automatically handles schema loading, caching, and execution, ensuring responses are based on up-to-date schema information.
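As an illustration of what happens under the hood, an MCP client issues a JSON-RPC tools/call request to invoke kql_execute. The framing (jsonrpc, method, params.name, params.arguments) follows the MCP specification; the argument names used here (cluster, database, query, visualize) are illustrative assumptions, not the server's documented schema — check the tool listing from your MCP client for the actual parameters.

```python
import json

# Hypothetical MCP "tools/call" request for the kql_execute tool.
# JSON-RPC framing follows the MCP spec; the argument names
# ("cluster", "database", "query", "visualize") are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "kql_execute",
        "arguments": {
            "cluster": "https://help.kusto.windows.net",
            "database": "Samples",
            "query": "StormEvents | take 5",
            "visualize": True,
        },
    },
}

# Serialize as the client would before writing it to the server's stdin.
payload = json.dumps(request)
print(payload)
```

Your MCP client builds and sends requests like this for you; the sketch only shows the shape of the conversation.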

How to install

Prerequisites you need before installation:

- Python 3.10 or higher

- Azure CLI installed and authenticated (az login)

- Access to an Azure Data Explorer (ADX) cluster

One-time quick start to run the MCP KQL Server locally:

# Option 1: Install from source
git clone https://github.com/4R9UN/mcp-kql-server.git
cd mcp-kql-server
pip install -e .

# Option 2: Install from PyPI
pip install mcp-kql-server

Start the MCP KQL Server using the standard Python module invocation:

python -m mcp_kql_server

The server initializes memory paths automatically, uses optimized defaults, and relies on your Azure CLI credentials for authentication.

Prerequisites and environment

The installation assumes you have Python 3.10+ and access to Azure Data Explorer clusters. You should also have the Azure CLI installed and authenticated.

No additional environment variables are required for a basic setup.

If you use an MCP client, ensure it is configured to communicate with the local stdio server as shown in the client configuration section.

Additional notes

Security and authentication rely on your existing Azure CLI credentials: the server stores no credentials itself. The schema cache lives on your machine and is never transmitted to external services.

Usage notes and troubleshooting help you recover from common issues, such as authentication errors or schema cache problems. If you encounter one, re-authenticate with Azure (az login) and rebuild the local schema cache on demand.

Development and deployment considerations: the server supports production deployment to Azure Container Apps with automated deployment scripts and security best practices, including RBAC and network isolation. If you plan to deploy in a production environment, refer to the deployment guide for detailed steps and architecture diagrams.

Usage examples demonstrate common tasks you can perform with the MCP KQL Server, such as querying the Samples database, discovering and caching cluster schemas, and performing time-based analyses with visual outputs.
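For instance, a time-based analysis phrased in plain English ("show weekly storm event counts for 2007 as a chart") might translate to a KQL query like the sketch below, run against the StormEvents table in the public help cluster's Samples database; the exact query the server generates will depend on the AI context it has cached.

```kql
StormEvents
| where StartTime >= datetime(2007-01-01)
| summarize EventCount = count() by bin(StartTime, 7d)
| render timechart
```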

Available tools

kql_execute

Executes KQL queries with AI context, supporting multiple output formats and live schema validation.

kql_schema_memory

Discovers and caches cluster schemas, explores databases, and generates AI-driven context and analysis reports.
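A schema-discovery call follows the same tools/call pattern as query execution. The sketch below shows a hypothetical request for kql_schema_memory; the argument names (cluster_uri, force_refresh) are illustrative assumptions — consult the tool listing reported by your MCP client for the real parameter schema.

```python
import json

# Hypothetical MCP "tools/call" request for the kql_schema_memory tool.
# JSON-RPC framing follows the MCP spec; the argument names
# ("cluster_uri", "force_refresh") are assumptions for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "kql_schema_memory",
        "arguments": {
            "cluster_uri": "https://help.kusto.windows.net",
            "force_refresh": False,
        },
    },
}

print(json.dumps(request, indent=2))
```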