
This skill guides designing and implementing data-flow analyses for compiler optimization, covering lattice design, transfer functions, fixpoint computation, and interprocedural analysis.

npx playbooks add skill a5c-ai/babysitter --skill data-flow-analysis-framework

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.0 KB
---
name: data-flow-analysis-framework
description: Design and implement data-flow analyses for compiler optimization
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
  - Glob
  - Grep
metadata:
  specialization: computer-science
  domain: science
  category: compiler-optimization
  phase: 6
---

# Data Flow Analysis Framework

## Purpose

Provides expert guidance on designing and implementing data-flow analyses for compiler optimization and program analysis.

## Capabilities

- Forward/backward analysis specification
- Lattice definition and verification
- Transfer function generation
- Fixpoint computation (worklist algorithm)
- Analysis soundness verification
- Interprocedural analysis

## Usage Guidelines

1. **Lattice Design**: Define abstract domain and lattice
2. **Transfer Functions**: Define transfer functions for statements
3. **Analysis Direction**: Specify forward or backward
4. **Fixpoint**: Implement worklist algorithm
5. **Verification**: Verify soundness of analysis
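The first two steps above can be sketched concretely. Below is a minimal flat lattice for constant propagation (bottom, constants, top) with two statement-level transfer functions; the names and the dict-based environment encoding are illustrative assumptions, not this skill's actual API.

```python
# Flat constant-propagation lattice: BOTTOM (unreached) < constants < TOP (unknown).
BOTTOM, TOP = "bottom", "top"

def join(a, b):
    """Least upper bound of two lattice elements."""
    if a == BOTTOM:
        return b
    if b == BOTTOM:
        return a
    if a == b:
        return a
    return TOP  # conflicting constants collapse to "unknown"

def transfer_assign_const(env, var, value):
    """Transfer function for `var = value` (a literal constant)."""
    out = dict(env)
    out[var] = value
    return out

def transfer_assign_add(env, var, lhs, rhs):
    """Transfer function for `var = lhs + rhs`; folds when both operands are constants."""
    a, b = env.get(lhs, TOP), env.get(rhs, TOP)
    out = dict(env)
    out[var] = a + b if isinstance(a, int) and isinstance(b, int) else TOP
    return out
```

Keeping transfer functions pure (returning a new environment rather than mutating the input) makes the fixpoint step easier to reason about.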

## Tools/Libraries

- LLVM
- GCC internals
- Soot
- WALA

Overview

This skill helps design and implement data-flow analyses for compiler optimization and program analysis. It provides patterns for lattice design, transfer function generation, and fixpoint computation. The goal is to produce sound, maintainable analyses that scale from intraprocedural to interprocedural settings.

How this skill works

You specify the abstract domain and lattice, then encode transfer functions for program statements or instructions. The framework drives a worklist-based fixpoint engine, supporting both forward and backward analyses and common widening/meet operations. It includes checks to verify lattice properties and offers guidance for proving or testing soundness.
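The worklist engine described above can be sketched as follows. This is an assumed, minimal interface (a CFG as a successor map, plus user-supplied `transfer` and `join`), not the framework's actual implementation; note the change-detection check that requeues a node only when its incoming fact grows.

```python
def fixpoint(cfg, entry, init, transfer, join, bottom):
    """Forward worklist fixpoint. `cfg` maps each node to its successor list;
    `transfer(node, fact)` produces the node's output fact; `join` merges
    facts at control-flow join points."""
    inf = {n: bottom for n in cfg}  # fact on entry to each node
    inf[entry] = init
    worklist = [entry]
    while worklist:
        node = worklist.pop()
        out = transfer(node, inf[node])
        for succ in cfg[node]:
            merged = join(inf[succ], out)
            if merged != inf[succ]:  # change detection: only requeue on growth
                inf[succ] = merged
                worklist.append(succ)
    return inf
```

A backward analysis reuses the same loop with the CFG's edges reversed, which is why direction can be modeled as a parameter rather than a second engine.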

When to use it

  • Building compiler optimizations like constant propagation, live variable analysis, or available expressions
  • Implementing static analyses for bug detection or security properties
  • Scaling analyses from single functions to interprocedural settings
  • Prototyping new abstract domains or custom transfer functions
  • Verifying correctness and soundness of existing analyses

Best practices

  • Start with a simple lattice and prove or test monotonicity before adding complexity
  • Keep transfer functions local and side-effect free where possible to simplify reasoning
  • Prefer worklist algorithms with change-detection to avoid unnecessary recomputation
  • Use widening or bound iteration depth for domains with infinite ascending chains
  • Write unit tests that exercise lattice joins, edge cases, and convergence
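As an example of the kind of unit test suggested above, the join laws (commutativity, associativity, idempotence) and transfer monotonicity can be checked exhaustively over a small finite lattice. All names here are illustrative; the transfer function is a made-up monotone example.

```python
from itertools import product

BOTTOM, TOP = "bottom", "top"
ELEMS = [BOTTOM, 0, 1, TOP]  # small flat lattice to enumerate

def join(a, b):
    if a == BOTTOM:
        return b
    if b == BOTTOM:
        return a
    return a if a == b else TOP

def leq(a, b):
    return join(a, b) == b  # a <= b iff a JOIN b == b

def transfer(x):
    """Example monotone transfer: negate if constant, pass through otherwise."""
    return -x if isinstance(x, int) else x

def test_lattice_laws():
    for a, b, c in product(ELEMS, repeat=3):
        assert join(a, b) == join(b, a)                    # commutative
        assert join(a, join(b, c)) == join(join(a, b), c)  # associative
        assert join(a, a) == a                             # idempotent
    for a, b in product(ELEMS, repeat=2):
        if leq(a, b):
            assert leq(transfer(a), transfer(b))           # monotone
```

Exhaustive enumeration is feasible for small finite lattices; for larger or infinite domains, property-based testing over sampled elements is a common substitute.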

Example use cases

  • Implementing forward constant propagation using a constant/unknown lattice and statement-level transfer functions
  • Writing a backward live-variable analysis to drive register allocation or dead code elimination
  • Designing an interprocedural analysis with summaries for scalable whole-program reasoning
  • Verifying that a nullness analysis is sound by checking lattice ordering and transfer monotonicity
  • Porting canonical analyses to a new IR by reusing lattice and fixpoint components
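The backward live-variable case above has a classic transfer function, live-in(s) = use(s) ∪ (live-out(s) − def(s)), which might be sketched like this; the per-statement `(defs, uses)` encoding is an assumption for illustration.

```python
def liveness_transfer(live_out, defs, uses):
    """Backward transfer for one statement: kill definitions, add uses."""
    return (live_out - defs) | uses

def analyze_block(statements, live_out):
    """Propagate liveness backward through a straight-line block.
    `statements` is a list of (defs, uses) set pairs in program order."""
    live = set(live_out)
    for defs, uses in reversed(statements):
        live = liveness_transfer(live, defs, uses)
    return live
```

For example, for the block `a = b + c; d = a` with only `d` live at exit, the analysis reports `b` and `c` live on entry.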

FAQ

Does this support both forward and backward analyses?

Yes — the framework models direction explicitly and reuses the same lattice and fixpoint machinery for either case.

How do I handle infinite domains?

Use widening, limit iteration counts, or design a finite abstraction to ensure termination; the framework documents common strategies.
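One common widening strategy, sketched here for an interval domain: any bound that grows between iterations jumps straight to infinity, so ascending chains cannot be infinite. The `(lo, hi)` tuple encoding is an illustrative assumption.

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def widen(old, new):
    """Interval widening: keep stable bounds, send growing bounds to infinity."""
    lo = old[0] if new[0] >= old[0] else NEG_INF
    hi = old[1] if new[1] <= old[1] else POS_INF
    return (lo, hi)
```

In practice widening is often paired with a later narrowing pass to recover precision lost by the jump to infinity.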