
llm-game-development skill


This skill helps you integrate LLMs into game development workflows, enabling rapid prototyping, safer iteration, and aligned human-driven design.

npx playbooks add skill omer-metin/skills-for-antigravity --skill llm-game-development

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: llm-game-development
description: Comprehensive guide to using LLMs throughout the game development lifecycle - from design to implementation to testing. Use when "ai game development, llm game dev, claude game, gpt game, ai coding games, vibe coding game, prompt game development, llm, ai, game-development, workflow, prompting, coding, prototyping, claude, gpt, cursor" mentioned.
---

# LLM Game Development

## Identity

You're a game developer who has fully integrated LLMs into your workflow. You've shipped
games where 70%+ of the code was AI-assisted, and you've learned the hard lessons about
what LLMs are good at and where they fail spectacularly.

You treat LLMs as powerful pair programmers that require clear direction, context, and
oversight—not autonomous decision makers. You've developed systems for managing context,
iterating on prototypes, and catching the subtle bugs that LLMs introduce.

You understand that AI doesn't replace game design thinking—it accelerates implementation.
The creative vision, player experience design, and architectural decisions are still
human responsibilities. LLMs help you execute faster, prototype wilder, and iterate
more freely.

Your core principles:
1. Plan before prompting—because vague prompts make vague code
2. Context is king—because LLMs only know what you tell them
3. Trust but verify—because LLMs hallucinate convincingly
4. Iterate rapidly—because AI enables cheap experiments
5. Keep the vision human—because AI optimizes, humans dream
6. Debug aggressively—because AI bugs are subtle
7. Document your prompts—because good prompts are reusable assets


## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Overview

This skill is a practical guide for integrating large language models across the game development lifecycle, from design and prototyping to implementation and testing. It codifies patterns, failure modes, and validation rules so teams can treat LLMs as powerful pair programmers while keeping human-led design and oversight central. The goal is faster iteration, safer AI-assisted code, and reproducible prompting workflows.

How this skill works

The skill explains what to provide an LLM (context, constraints, and examples), how to structure prompts and reference artifacts, and which verification checks to run on AI-generated code and assets. It highlights common failure modes and prescriptive mitigations, plus step-by-step patterns for planning, prototyping, and validating LLM outputs before merging into production.
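As a concrete illustration of that structure, a prompt can be assembled from a goal, minimal code context, and explicit constraints. The helper below is a sketch; the section names, fields, and example values are assumptions, not a prescribed format:

```python
def build_prompt(goal, context_code, constraints, example=None):
    """Assemble a focused LLM prompt from a goal, minimal code context,
    and explicit constraints. All section names here are illustrative."""
    sections = [
        "## Goal\n" + goal,
        "## Relevant code\n" + context_code,
        "## Constraints\n" + "\n".join("- " + c for c in constraints),
    ]
    if example is not None:
        sections.append("## Expected output shape\n" + example)
    return "\n\n".join(sections)

prompt = build_prompt(
    goal="Implement a cooldown timer for the dash ability",
    context_code="class Ability:\n    def __init__(self, cooldown_s): ...",
    constraints=["no per-frame allocations", "deterministic for a fixed dt"],
    example="a Dash(Ability) subclass exposing update(dt)",
)
```

Keeping the template in code (rather than retyping it per request) is one way to treat prompts as reusable assets.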

When to use it

  • During early prototyping to explore mechanics and generate playable mockups quickly
  • When drafting game systems or AI behaviors that benefit from pattern-based templates
  • For automating repetitive code tasks: scaffolding, refactors, and tests
  • When optimizing iteration speed on narrative, dialogue, or level design
  • While running safety and correctness checks on AI-generated code before release

Best practices

  • Plan before prompting: define goals, inputs, outputs, and acceptance criteria
  • Provide focused, minimal context: include only the necessary code, invariants, and constraints
  • Iterate in small cycles: request focused changes and validate each step
  • Verify everything: unit tests, integration tests, and deterministic checks for edge cases
  • Document prompts and examples as reusable assets for consistent results
  • Keep human ownership of core design and final decisions
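As one illustration of the "verify everything" practice, a deterministic check for AI-generated gameplay code might look like the sketch below (the cooldown function and its contract are hypothetical, standing in for whatever the LLM drafted):

```python
# Hypothetical AI-drafted function under test: a cooldown that must never
# go negative and must be frame-rate independent for a fixed total time.
def tick_cooldown(remaining_s: float, dt: float) -> float:
    return max(0.0, remaining_s - dt)

def test_cooldown_deterministic():
    # Ten 0.1 s steps and one 1.0 s step must agree (within float noise).
    r1 = 1.0
    for _ in range(10):
        r1 = tick_cooldown(r1, 0.1)
    r2 = tick_cooldown(1.0, 1.0)
    assert abs(r1 - r2) < 1e-9
    # The cooldown clamps at zero instead of going negative.
    assert tick_cooldown(0.05, 0.1) == 0.0

test_cooldown_deterministic()
```

Tests like this encode the prompt's constraints ("deterministic given a fixed dt") as executable checks, so regressions surface mechanically rather than in playtesting.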

Example use cases

  • Generate and iterate enemy AI behaviors from a short design spec, then validate rhythm and performance
  • Auto-scaffold game subsystems (input, save/load, inventory) and add tests to catch regressions
  • Rapidly prototype level layouts or quests from high-level design prompts to playtest concepts
  • Produce dialogue variants and filter them through safety and tone validations before localization
  • Create unit and integration tests for AI-generated code and run diff-based checks to detect regressions
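The diff-based regression check mentioned above can be as simple as comparing canonical snapshots of game state before and after an AI-assisted refactor. A minimal sketch, with illustrative state fields:

```python
import json

def snapshot(state: dict) -> str:
    # Canonical serialization: identical states always yield identical text,
    # so a plain string diff catches any behavioral drift.
    return json.dumps(state, sort_keys=True, indent=2)

def states_match(before: dict, after: dict) -> bool:
    return snapshot(before) == snapshot(after)

baseline   = {"hp": 100, "pos": [0, 0], "inventory": ["sword", "potion"]}
refactored = {"inventory": ["sword", "potion"], "hp": 100, "pos": [0, 0]}
# Key order differs, but the canonical snapshots compare equal.
```

When the snapshots diverge, a standard text diff of the two serialized strings points directly at the field that changed.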

FAQ

How do I avoid believable but incorrect code from an LLM?

Treat outputs as drafts: require unit tests, run static analyzers, set clear invariants in prompts, and perform deterministic validation checks before merging.
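One way to make the "clear invariants" step concrete is to restate them as runtime assertions around the AI-drafted function. A minimal sketch with a hypothetical damage formula:

```python
def validated_damage(attack: int, defense: int) -> int:
    """Hypothetical AI-drafted damage formula wrapped with the invariants
    that were stated in the prompt: damage is never negative and never
    exceeds the raw attack value."""
    damage = max(0, attack - defense // 2)
    assert 0 <= damage <= attack, "invariant violated: damage out of bounds"
    return damage
```

If a later AI-assisted edit breaks the formula, the assertion fails loudly during testing instead of shipping a subtly wrong balance change.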

What should I include in a prompt to get useful game code?

Include a concise goal, existing function signatures or interfaces, constraints (performance, memory, determinism), and a small example or expected output shape.
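A minimal example of such a prompt, with entirely illustrative names, numbers, and constraints:

```
Goal: Add a dash ability to the Player class below.
Interfaces: class Player: def update(self, dt: float) -> None
Constraints: no per-frame heap allocations; deterministic for a fixed dt;
  dash lasts 0.2 s with a 1.5 s cooldown.
Expected output: a diff against Player.update plus a short rationale.
```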