
Bpftrace MCP Server

An MCP server that uses eBPF to trace your kernel

Installation
Add the following to your MCP client configuration file.

Configuration

```json
{
  "mcpServers": {
    "eunomia-bpf-mcptrace": {
      "command": "bpftrace-mcp-server",
      "args": [],
      "env": {
        "BPFTRACE_PASSWD": "YOUR_SUDO_PASSWORD"
      }
    }
  }
}
```

You can run a Rust-based MCP server that bridges AI assistants with bpftrace kernel tracing. It lets you discover system probes, execute traces, and retrieve results securely, without giving AI direct root access. This guide walks you through practical steps to install, run, and use the server for kernel tracing tasks.

How to use

Connect to the MCP server from your MCP client and start by listing available probes to understand what you can observe. Then pick a trace point or a ready-made script to execute a trace. You can run long-running traces and fetch results later, which is ideal for intermittent production issues. Use the information and examples provided by the server to craft traces that fit your debugging needs.
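As a concrete starting point, here is the kind of bpftrace one-liner you might submit through the server's exec_program tool. The script is standard bpftrace syntax (not specific to this server) and needs root, which is exactly what the server mediates for you:

```bash
# Print the process name and file path for every openat() syscall.
# Ctrl-C (or the server stopping the trace) ends it and prints any maps.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat {
  printf("%s -> %s\n", comm, str(args->filename));
}'
```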

How to install

Prerequisites: install the Rust toolchain and ensure bpftrace is available on your system.

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

```bash
sudo apt-get install bpftrace  # Ubuntu/Debian
# or
sudo dnf install bpftrace      # Fedora
```
Install from crates.io

Install the MCP server binary from crates.io so it can be run directly.

```bash
cargo install bpftrace-mcp-server
```

Build from source

If you prefer building from source, clone the repository, build in release mode, and run the produced binary.

```bash
git clone https://github.com/yunwei37/MCPtrace
cd MCPtrace
cargo build --release
```

Run the server

Use the built binary to start the MCP server. Choose the standard runtime or a development run depending on your workflow.

```bash
# If installed via cargo
bpftrace-mcp-server

# If built from source
./target/release/bpftrace-mcp-server

# Or build and run in one step from the source tree
cargo run --release
```

Configuration and setup guidance

Configure how clients authenticate and how traces are executed by adjusting environment settings and startup options. The server supports a secure gateway model so that AI clients do not obtain direct root access. Refer to the setup instructions for specific environment variables and security considerations.
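If you launch the server directly rather than through an MCP client configuration, the sudo password can be supplied via the same environment variable shown in the configuration above. The command substitution below is illustrative; `pass` stands in for whatever local secret store you use:

```bash
# Supply the sudo password only to this process, sourced from a secret
# manager instead of being hard-coded in a config file.
BPFTRACE_PASSWD="$(pass show sudo)" bpftrace-mcp-server
```

Scoping the variable to a single invocation keeps the password out of your shell history and global environment.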

Available tools

list_probes

List available kernel probe points, with optional filtering to find probes that match your observability needs.
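This corresponds to bpftrace's own probe listing, so you can preview the same data on the command line, for example filtered to syscall-entry tracepoints:

```bash
# List syscall-entry tracepoints (the same probes the server exposes).
sudo bpftrace -l 'tracepoint:syscalls:sys_enter_*'
```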

bpf_info

Retrieve system information including kernel helpers, features, map types, and probe types to guide trace creation.
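The command-line equivalent is bpftrace's `--info` flag, which prints what the running kernel supports:

```bash
# Show kernel helpers, features, map types, and probe types.
sudo bpftrace --info
```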

exec_program

Execute a bpftrace script or one-liner and obtain an execution identifier for later retrieval of results.
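Long-running traces pair naturally with this tool. For instance, a script like the following counts reads per process and exits on its own after ten seconds; its output would then be fetched with get_result using the returned execution identifier:

```bash
# Count vfs_read() calls per process for ten seconds, then print the map.
sudo bpftrace -e 'kprobe:vfs_read { @reads[comm] = count(); }
interval:s:10 { exit(); }'
```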

get_result

Poll for and retrieve the output of a previously started trace execution using the provided execution identifier.