This skill helps you build AI-powered code generation, automated refactoring, and code review tools using LLM function calling and structured outputs.
Run the command below to add this skill to your agents:

`npx playbooks add skill omer-metin/skills-for-antigravity --skill ai-code-generation`
---
name: ai-code-generation
description: Comprehensive patterns for building AI-powered code generation tools, code assistants, automated refactoring, code review, and structured output generation using LLMs with function calling and tool use. Use when "code generation, AI code assistant, function calling, structured output, code review AI, automated refactoring, tool use, code completion, agent code" is mentioned.
---
# AI Code Generation
## Identity

This skill provides comprehensive patterns and practical guidance for building AI-powered code generation tools, code assistants, automated refactoring, code review systems, and structured output generation using LLMs with function calling and tool use. It focuses on repeatable architectures, safety checks, and strict validation to produce reliable developer-facing features. The skill emphasizes Python examples and real-world workflows for integrating agents and tools into developer toolchains.

The skill prescribes concrete patterns for creation, diagnosis, and review: follow the pattern-driven design for building components, use a sharp-edges diagnostic checklist to surface common failure modes, and apply strict validation rules to verify outputs. It integrates LLM function calling and tool invocation patterns so the agent can run external formatters, linters, test runners, or static analyzers against generated code. Outputs are structured and schema-validated to reduce hallucinations and support automated pipelines.

## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
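For instance, the tool-use loop might look like the sketch below, written against the OpenAI Python SDK's function-calling interface. The `run_linter` tool, its ruff-based handler, and the model name are illustrative assumptions, not part of this skill's reference files:

```python
import json
import subprocess
import tempfile

from openai import OpenAI  # assumes the openai package (v1+) is installed

client = OpenAI()

# Hypothetical tool schema: lets the model request a lint pass over code it wrote.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_linter",
        "description": "Run a linter over a Python snippet and return its findings.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

def run_linter(code: str) -> str:
    """Hypothetical handler: write the snippet to a temp file and lint it with ruff."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    result = subprocess.run(["ruff", "check", f.name], capture_output=True, text=True)
    return result.stdout or "no findings"

messages = [{"role": "user", "content": "Write a slugify function, then lint it."}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
message = response.choices[0].message

# Execute any requested tool calls and feed the results back for a final answer.
if message.tool_calls:
    messages.append(message)
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": run_linter(args["code"]),
        })
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)

print(response.choices[0].message.content)
```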
## FAQ

**How do I reduce hallucinations in generated code?**
Produce schema-constrained outputs, call external validators and linters, and include a diagnostic step that checks for unsupported APIs or improbable changes before applying code.
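A minimal sketch of such a pipeline, assuming the model emits JSON with `file` and `code` fields; the `GeneratedChange` schema (Pydantic v2) and the `ALLOWED_MODULES` allowlist are hypothetical stand-ins for project-specific rules:

```python
import ast

from pydantic import BaseModel, ValidationError  # assumes pydantic v2

ALLOWED_MODULES = {"json", "re", "pathlib"}  # illustrative per-project allowlist

class GeneratedChange(BaseModel):
    file: str
    code: str

def diagnose(raw_output: str) -> list[str]:
    """Return a list of problems; an empty list means the output may proceed."""
    try:
        change = GeneratedChange.model_validate_json(raw_output)  # schema constraint
    except ValidationError as exc:
        return [f"schema violation: {exc}"]
    try:
        tree = ast.parse(change.code)  # reject code that does not even parse
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    problems = []
    for node in ast.walk(tree):  # flag imports of modules that were never approved
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            if name.split(".")[0] not in ALLOWED_MODULES:
                problems.append(f"unsupported import: {name}")
    return problems
```

An empty result here is necessary but not sufficient; external linters and tests should still run before anything is applied.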
**Can generated changes be applied automatically?**
Yes, but only when outputs pass automated validation and tests; otherwise produce suggested diffs for human review to avoid risky modifications.
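As a sketch, the gate might run the test suite and roll back on failure, emitting a diff for human review instead. This assumes `pytest` is installed; a real pipeline would run against an isolated checkout rather than the live working tree:

```python
import difflib
import subprocess
from pathlib import Path

def apply_or_suggest(path: Path, new_source: str) -> bool:
    """Apply the generated change only if the test suite passes; otherwise print a diff."""
    original = path.read_text()
    path.write_text(new_source)
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode == 0:
        return True  # validated change stays applied
    path.write_text(original)  # roll back the risky modification
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        new_source.splitlines(keepends=True),
        fromfile=str(path),
        tofile=f"{path} (suggested)",
    )
    print("".join(diff))  # hand the suggested diff to a human reviewer
    return False
```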