This skill helps you design and implement agent-based models in Python to simulate complex systems and emergent behaviors.
Install with:

```shell
npx playbooks add skill omer-metin/skills-for-antigravity --skill agent-based-modeling
```
---
name: agent-based-modeling
description: Design and implement agent-based models (ABM) for simulating complex systems with emergent behavior arising from individual agent interactions. Use when "agent-based", "multi-agent", "emergent behavior", "swarm simulation", "social simulation", "crowd modeling", "population dynamics", or "individual-based" is mentioned.
---
# Agent Based Modeling
## Identity
This skill designs and implements agent-based models (ABM) to simulate complex systems in which global patterns emerge from individual agent interactions. It focuses on building reproducible Python simulations, validating model constraints, and diagnosing critical failure modes. The goal is to produce robust, explainable ABMs for research, policy, or engineering decisions.

The skill follows established design patterns for agent definitions, environment representation, scheduling, and data collection to ensure modular, testable models. It runs diagnostics against a checklist of sharp-edge failure modes (e.g., calibration drift, boundary artifacts, scalability bottlenecks) and applies strict validation rules to inputs, parameters, and outputs. Deliverables include runnable Python models, unit tests for key behaviors, and a validation report documenting constraints and risks.

## Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failure modes and *why* they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
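As an illustration of those design patterns, here is a minimal pure-Python sketch. The `Agent` and `Model` names and the random-walk step rule are hypothetical (not taken from the reference files); the point is the separation of agent definition, shuffled-activation scheduling, and per-tick data collection:

```python
import random
from dataclasses import dataclass


@dataclass
class Agent:
    """A minimal agent: an id and a 1-D position, updated by a random-walk rule."""
    uid: int
    pos: float = 0.0

    def step(self, rng: random.Random) -> None:
        self.pos += rng.choice([-1.0, 1.0])


class Model:
    """Holds the agents, a scheduler (shuffled activation), and a data collector."""

    def __init__(self, n_agents: int, seed: int = 42):
        self.rng = random.Random(seed)           # fixed seed for reproducibility
        self.agents = [Agent(uid=i) for i in range(n_agents)]
        self.history: list[float] = []           # data collection: mean position per tick

    def step(self) -> None:
        order = self.agents[:]
        self.rng.shuffle(order)                  # random activation order each tick
        for agent in order:
            agent.step(self.rng)
        self.history.append(sum(a.pos for a in self.agents) / len(self.agents))


model = Model(n_agents=100, seed=7)
for _ in range(50):
    model.step()
print(len(model.history))  # 50 collected observations
```

Keeping the RNG on the model (rather than calling the global `random` module) is what makes a run replayable from its seed.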
## FAQ

**How do you prevent results from being dominated by random noise?**

Use fixed random seeds for reproducibility, run many stochastic replicates, report confidence intervals, and validate that observed patterns persist across parameter ranges.
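A sketch of that workflow on a toy replicate function (the random walk and the replicate count are placeholders; the 1.96 factor assumes a normal approximation for the 95% confidence interval):

```python
import random
import statistics


def run_replicate(seed: int, n_steps: int = 200) -> float:
    """One stochastic replicate: a 1-D random walk; returns the final position."""
    rng = random.Random(seed)
    pos = 0.0
    for _ in range(n_steps):
        pos += rng.choice([-1.0, 1.0])
    return pos


# Many replicates with distinct, fixed seeds -> a reproducible ensemble.
outcomes = [run_replicate(seed) for seed in range(100)]
mean = statistics.fmean(outcomes)
sem = statistics.stdev(outcomes) / len(outcomes) ** 0.5   # standard error of the mean
ci95 = (mean - 1.96 * sem, mean + 1.96 * sem)             # normal-approximation 95% CI
print(f"mean={mean:.2f}, 95% CI=({ci95[0]:.2f}, {ci95[1]:.2f})")
```

Because every replicate's seed is recorded, any single outlier run can be replayed exactly for debugging.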
**What are common failure modes to watch for?**

Watch for calibration drift, edge effects from environment boundaries, unrealistic agent assumptions, unintended global constraints, and performance bottlenecks at scale. Diagnose these with targeted tests and sensitivity sweeps.
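A sensitivity sweep can be sketched as follows. The model and its `p_move` parameter are made up for illustration; the pattern to copy is sweeping a parameter across a range, averaging over several seeded replicates, and checking that the qualitative trend persists:

```python
import random


def simulate(p_move: float, seed: int, n_agents: int = 50, n_steps: int = 100) -> float:
    """Toy model: each agent moves right with probability p_move per step;
    returns the mean final position across agents."""
    rng = random.Random(seed)
    positions = [0] * n_agents
    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < p_move:
                positions[i] += 1
    return sum(positions) / n_agents


# Sweep the parameter, averaging over 10 seeded replicates per value.
sweep = {}
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    sweep[p] = sum(simulate(p, seed) for seed in range(10)) / 10

for p, value in sweep.items():
    print(f"p_move={p}: mean final position ~ {value:.1f}")
```

If the output trend changed direction between adjacent parameter values, that would flag a regime boundary or a bug worth investigating with targeted tests.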