
debugging-dags skill

/skills/debugging-dags

This skill performs comprehensive DAG failure diagnosis and root cause analysis for Airflow pipelines, delivering actionable remediation and prevention recommendations.

npx playbooks add skill astronomer/agents --skill debugging-dags

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
3.3 KB
---
name: debugging-dags
description: Comprehensive DAG failure diagnosis and root cause analysis. Use for complex debugging requests requiring deep investigation like "diagnose and fix the pipeline", "full root cause analysis", "why is this failing and how to prevent it". For simple debugging ("why did dag fail", "show logs"), the airflow entrypoint skill handles it directly. This skill provides structured investigation and prevention recommendations.
---

# DAG Diagnosis

You are a data engineer debugging a failed Airflow DAG. Follow this systematic approach to identify the root cause and provide actionable remediation.

## Running the CLI

Run all `af` commands using uvx (no installation required):

```bash
uvx --from astro-airflow-mcp af <command>
```

Throughout this document, `af` is shorthand for `uvx --from astro-airflow-mcp af`.
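For example, `af dags stats` from Step 1 below expands to:

```bash
uvx --from astro-airflow-mcp af dags stats
```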

---

## Step 1: Identify the Failure

If a specific DAG was mentioned:
- If a run_id was provided, run `af runs diagnose <dag_id> <dag_run_id>`
- If no run_id was specified, run `af dags stats` to find recent failures

If no DAG was specified:
- Run `af health` to find recent failures across all DAGs
- Check for import errors with `af dags errors`
- Show DAGs with recent failures
- Ask which DAG to investigate further
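
For example, when no DAG was specified, a minimal starting sequence using only the commands above might look like this:

```bash
# Survey recent failures across all DAGs
af health

# Surface import errors that prevent DAGs from loading at all
af dags errors

# Find DAGs with recent failures to pick an investigation target
af dags stats
```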

## Step 2: Get the Error Details

Once you have identified a failed task:

1. **Get task logs** using `af tasks logs <dag_id> <dag_run_id> <task_id>`
2. **Look for the actual exception** - scroll past the Airflow boilerplate to find the real error
3. **Categorize the failure type**:
   - **Data issue**: Missing data, schema change, null values, constraint violation
   - **Code issue**: Bug, syntax error, import failure, type error
   - **Infrastructure issue**: Connection timeout, resource exhaustion, permission denied
   - **Dependency issue**: Upstream failure, external API down, rate limiting
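
One quick way to surface the underlying exception without reading the whole log (a sketch assuming a POSIX shell with grep available; the IDs are placeholders for the run identified in Step 1):

```bash
# Fetch the task log and show only lines that look like the real error,
# plus a few lines of context after each match
af tasks logs <dag_id> <dag_run_id> <task_id> | grep -iE -A 5 'error|exception|traceback'
```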

## Step 3: Check Context

Gather additional context to understand WHY this happened:

1. **Recent changes**: Was there a code deploy? Check git history if available
2. **Data volume**: Did data volume spike? Run a quick count on source tables
3. **Upstream health**: Did upstream tasks succeed but produce unexpected data?
4. **Historical pattern**: Is this a recurring failure? Check if same task failed before
5. **Timing**: Did this fail at an unusual time? (resource contention, maintenance windows)

Use `af runs get <dag_id> <dag_run_id>` to compare the failed run against recent successful runs.
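
For example, pulling the failed run and a recent successful run side by side (the run IDs here are placeholders):

```bash
# Metadata for the failed run
af runs get <dag_id> <failed_dag_run_id>

# Metadata for a recent successful run, for comparison
af runs get <dag_id> <successful_dag_run_id>
```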

## Step 4: Provide Actionable Output

Structure your diagnosis as:

### Root Cause
What actually broke? Be specific - not "the task failed" but "the task failed because column X was null in 15% of rows when the code expected 0%".

### Impact Assessment
- What data is affected? Which tables didn't get updated?
- What downstream processes are blocked?
- Is this blocking production dashboards or reports?

### Immediate Fix
Specific steps to resolve RIGHT NOW:
1. If it's a data issue: SQL to fix or skip bad records
2. If it's a code issue: The exact code change needed
3. If it's infra: Who to contact or what to restart

### Prevention
How to prevent this from happening again:
- Add data quality checks?
- Add better error handling?
- Add alerting for edge cases?
- Update documentation?

### Quick Commands
Provide ready-to-use commands:
- To clear and rerun failed tasks: `af tasks clear <dag_id> <run_id> <task_ids> -D`
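
A minimal recovery sketch, assuming the underlying fix has already been applied:

```bash
# Clear the failed tasks so they are rerun
af tasks clear <dag_id> <run_id> <task_ids> -D

# Confirm the state of the new attempt
af runs get <dag_id> <run_id>
```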

Overview

This skill performs comprehensive DAG failure diagnosis and root cause analysis for Apache Airflow pipelines. It guides a systematic investigation from identifying the failing run to delivering immediate fixes and long-term prevention recommendations. Use it for complex, multi-faceted debugging that requires deep context and structured remediation.

How this skill works

The skill inspects DAG run and task state, fetches and parses task logs to extract the real exception, and classifies failures into data, code, infrastructure, or dependency issues. It compares failed runs to recent successful runs, checks recent code or data changes, and assembles a focused Root Cause, Impact Assessment, Immediate Fix, and Prevention plan. It also supplies ready-to-run CLI commands to validate fixes and rerun tasks.

When to use it

  • When a pipeline failure needs full root cause analysis beyond simple log viewing
  • When failures recur intermittently or across multiple DAGs
  • When the cause is unclear and requires correlation of logs, run metadata, and upstream health
  • When you must provide remediation steps and prevention guidance to engineers or SREs
  • When a production report or downstream process is blocked and you need an actionable recovery plan

Best practices

  • Start by identifying the specific failed DAG run before deep investigation
  • Always extract the actual exception from task logs, ignoring Airflow boilerplate
  • Classify the failure (data, code, infra, dependency) to focus remediation
  • Compare the failed run to recent successful runs and recent code/data changes
  • Provide precise, executable remediation: SQL fixes, code patches, or infra actions, plus commands to clear and rerun tasks
  • Include prevention: data quality checks, retries/backoff, alerting, and documentation updates

Example use cases

  • Diagnose a DAG that intermittently fails with downstream data inconsistencies
  • Perform a full root cause analysis after a production report went stale
  • Investigate repeated import errors after a deploy and recommend a rollback or patch
  • Analyze resource exhaustion errors and produce an infra remediation and monitoring plan
  • Produce step-by-step remediation commands and rerun instructions for on-call engineers

FAQ

How do I find which DAG run to investigate first?

Run `af health` for a cluster-wide view of recent failures, or `af dags stats` for per-DAG failure counts, then target the run with the largest impact or the most recent failure.

What immediate commands will help recover a failed task?

Retrieve the task logs with `af tasks logs` to find the error, apply a targeted fix (SQL patch, code change, or infra restart), then clear and rerun the failed tasks with `af tasks clear`.