
This skill helps you generate and verify TLA+ specifications for distributed systems, including PlusCal translation, model checking, and refinement mapping.

npx playbooks add skill a5c-ai/babysitter --skill tla-plus-generator

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (1.1 KB)
---
name: tla-plus-generator
description: Generate and analyze TLA+ specifications for distributed systems verification
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
  - Glob
  - Grep
metadata:
  specialization: computer-science
  domain: science
  category: distributed-systems
  phase: 6
---

# TLA+ Generator

## Purpose

Provides expert guidance on generating TLA+ specifications for distributed systems design and verification.

## Capabilities

- TLA+ module generation from protocol descriptions
- Invariant and temporal property specification
- State space exploration configuration
- PlusCal to TLA+ translation
- Model checking execution
- Refinement mapping

## Usage Guidelines

1. **System Modeling**: Model system components and state
2. **Action Specification**: Define system actions/transitions
3. **Property Specification**: Specify safety and liveness properties
4. **Model Checking**: Configure and run TLC model checker
5. **Refinement**: Relate abstract and concrete specifications
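The workflow above can be sketched as a deliberately tiny, hypothetical module (the names `SimpleLock`, `Procs`, and `holder` are illustrative, not part of the skill): steps 1–3 correspond to the variable declarations and `Init`, the actions, and the invariant, respectively.

```tla
---------------------------- MODULE SimpleLock ----------------------------
CONSTANT Procs                  \* set of processes, e.g. {"p1", "p2"}
VARIABLE holder                 \* the process holding the lock, or "none"

Init == holder = "none"

Acquire(p) == holder = "none" /\ holder' = p     \* take a free lock
Release(p) == holder = p      /\ holder' = "none" \* give the lock back

Next == \E p \in Procs : Acquire(p) \/ Release(p)

Spec == Init /\ [][Next]_holder

\* Safety invariant: checked by TLC in every reachable state
TypeOK == holder \in Procs \cup {"none"}
============================================================================
```

TLC would then verify `TypeOK` against every state reachable under `Spec`.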

## Tools/Libraries

- TLA+ Toolbox
- TLC model checker
- TLAPS proof system
- PlusCal
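A TLC model is configured with a plain-text `.cfg` file alongside the module. A minimal sketch, using hypothetical names (`Spec`, `Procs`, `TypeOK`):

```cfg
\* SimpleLock.cfg -- hypothetical model configuration
SPECIFICATION Spec
CONSTANT Procs = {p1, p2}   \* p1, p2 become TLC model values
INVARIANT TypeOK
```

With `tla2tools.jar` available, TLC can typically be run from the command line as `java -cp tla2tools.jar tlc2.TLC -config SimpleLock.cfg SimpleLock.tla`.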

Overview

This skill generates and analyzes TLA+ specifications to support rigorous design and verification of distributed systems. It turns protocol descriptions or PlusCal code into TLA+ modules, helps express invariants and temporal properties, and guides model checking and refinement mapping. The goal is to reduce specification errors and accelerate verification-led design.

How this skill works

Provide a protocol description, system model, or PlusCal algorithm; the skill translates that input into a TLA+ module and annotates state variables, actions, and fairness conditions. It helps formulate safety and liveness properties, prepares TLC model configurations, and can suggest refinement mappings between abstract and concrete designs. It also offers guidance on using TLAPS for proofs and interpreting model checker counterexamples.
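As an illustration of the PlusCal-to-TLA+ path, a PlusCal algorithm lives inside a module comment; the translator (`pcal.trans`, or the Toolbox's translate command) appends the generated TLA+ between translation markers. A minimal hypothetical example:

```tla
------------------------------ MODULE SetFlag ------------------------------
(* --algorithm SetFlag
variables done = FALSE;
begin
  Set:
    done := TRUE;
end algorithm; *)

\* BEGIN TRANSLATION (the generated TLA+ definitions would appear here)
\* END TRANSLATION
=============================================================================
```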

When to use it

  • Designing or documenting distributed protocols (consensus, replication, leader election)
  • Validating safety and liveness properties before implementation
  • Translating PlusCal algorithms into TLA+ modules for model checking
  • Configuring TLC to explore state spaces and diagnose counterexamples
  • Creating or checking refinement mappings between spec layers

Best practices

  • Model components and state concisely; keep state variables minimal to control state-space size
  • Express actions atomically and name them to make invariants easier to track
  • Write clear safety invariants and separate liveness properties with explicit fairness
  • Start model checking with small parameters and increase them gradually to surface real issues early
  • Use refinement mappings to connect abstract designs to optimized implementations and verify correctness
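Separating safety from liveness, as recommended above, might look like the following hypothetical fragment. It assumes a spec with a variable tuple `vars`, a `Nodes` constant, and a `leader` state function; `Cardinality` comes from the standard `FiniteSets` module:

```tla
\* Safety: at most one node believes it is leader (needs EXTENDS FiniteSets)
AtMostOneLeader == Cardinality({n \in Nodes : leader[n]}) <= 1

\* Liveness: a leader is eventually elected -- stated against a spec
\* with explicit weak fairness, never against the bare safety spec
FairSpec == Spec /\ WF_vars(Next)
EventuallyLeader == <>(\E n \in Nodes : leader[n])
```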

Example use cases

  • Generate a TLA+ module from a leader election protocol described in prose and run TLC to find possible split-brain scenarios
  • Translate a PlusCal concurrent algorithm into TLA+ and verify mutual exclusion and absence of deadlock
  • Configure TLC to explore failure patterns in a replicated log and prioritize counterexamples
  • Formulate refinement mappings to show an optimized message-passing implementation refines an abstract specification
  • Prepare invariants and proof outlines for TLAPS to discharge critical safety properties
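A refinement mapping is typically expressed with `INSTANCE` substitutions. A hypothetical sketch, in which a queue-based concrete spec implements an abstract single-holder lock (`Head` and `<<>>` come from the standard `Sequences` module):

```tla
\* Inside the concrete module: map concrete state to abstract state
Abstract == INSTANCE AbstractLock
              WITH holder <- IF queue = <<>> THEN "none" ELSE Head(queue)

\* The refinement obligation, checkable with TLC or provable with TLAPS
THEOREM Refinement == Spec => Abstract!Spec
```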

FAQ

Can the skill run TLC model checking directly?

It prepares and configures TLC model inputs and explains how to run TLC, but running the model checker requires a local or remote TLC environment.

Does it handle large or infinite state spaces?

It helps mitigate state explosion by proposing abstraction, symmetry reduction, and parameter bounding, but very large or infinite spaces still require careful modeling and human judgment.
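Parameter bounding and symmetry reduction are both declared in the TLC config. A hypothetical example, assuming the module defines `NodePerm == Permutations(Nodes)` using the standard `TLC` module:

```cfg
SPECIFICATION Spec
CONSTANT Nodes = {n1, n2, n3}   \* small, bounded instantiation
SYMMETRY NodePerm               \* collapse states equal up to node renaming
INVARIANT TypeOK
```

Note that TLC's symmetry reduction is unsound for liveness checking, so declare a symmetry set only when checking invariants.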