
ai-code-generation skill

/skills/ai-code-generation

This skill helps you build AI-powered code generation, automated refactoring, and code review workflows using LLM function calling and structured outputs.

npx playbooks add skill omer-metin/skills-for-antigravity --skill ai-code-generation

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
1.2 KB
---
name: ai-code-generation
description: Comprehensive patterns for building AI-powered code generation tools, code assistants, automated refactoring, code review, and structured output generation using LLMs with function calling and tool use. Use when "code generation, AI code assistant, function calling, structured output, code review AI, automated refactoring, tool use, code completion, agent code" mentioned.
---

# AI Code Generation

## Identity

You are a code generation specialist who builds and reviews AI-powered developer tooling: code assistants, automated refactoring, AI code review, and structured output generation built on LLM function calling and tool use.

## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
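
A minimal sketch of how a harness might route a task type to the reference file named above; the loader function and its signature are illustrative, not part of the skill itself.

```python
from pathlib import Path

# Which reference file grounds which kind of request (matches the list above).
REFERENCE_BY_TASK = {
    "creation": "references/patterns.md",
    "diagnosis": "references/sharp_edges.md",
    "review": "references/validations.md",
}


def load_reference(task_type: str, skill_root: str = ".") -> str:
    """Return the reference text that should ground responses for this task type."""
    try:
        rel_path = REFERENCE_BY_TASK[task_type]
    except KeyError:
        raise ValueError(f"unknown task type: {task_type!r}") from None
    return (Path(skill_root) / rel_path).read_text(encoding="utf-8")
```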

Overview

This skill provides comprehensive patterns and practical guidance for building AI-powered code generation tools, code assistants, automated refactoring, code review systems, and structured output generation using LLMs with function calling and tool use. It focuses on repeatable architectures, safety checks, and strict validation to produce reliable developer-facing features. The skill emphasizes Python examples and real-world workflows for integrating agents and tools into developer toolchains.

How this skill works

The skill prescribes concrete patterns for creation, diagnosis, and review: follow the pattern-driven design for building components, use a sharp-edges diagnostic checklist to surface common failure modes, and apply strict validation rules to verify outputs. It integrates LLM function calling and tool invocation patterns so generated code can call external formatters, linters, test runners, or static analyzers. Outputs are structured and schema-validated to reduce hallucinations and support automated pipelines.
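
A minimal sketch of that loop, assuming a generic LLM client: `call_model` and the shape of its replies are placeholders, while the tool registry, the linter call, and the `jsonschema` check illustrate the tool-use and schema-validation steps described above.

```python
import json
import subprocess

from jsonschema import validate  # pip install jsonschema


def run_linter(path: str) -> str:
    """Run ruff on a file and return its findings as text (read-only tool)."""
    result = subprocess.run(["ruff", "check", path], capture_output=True, text=True)
    return result.stdout or "no findings"


TOOLS = {"run_linter": run_linter}

# Schema the final artifact must satisfy before it enters any pipeline.
PATCH_SCHEMA = {
    "type": "object",
    "required": ["file", "diff", "rationale"],
    "properties": {
        "file": {"type": "string"},
        "diff": {"type": "string"},
        "rationale": {"type": "string"},
    },
}


def generation_loop(prompt: str, call_model, max_steps: int = 5) -> dict:
    """Alternate between model calls and tool calls until a valid artifact appears."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_model(messages, tools=list(TOOLS))  # hypothetical client call
        if reply.get("tool_call"):  # model asked to run a tool
            name = reply["tool_call"]["name"]
            args = reply["tool_call"]["arguments"]
            messages.append({"role": "tool", "content": TOOLS[name](**args)})
            continue
        artifact = json.loads(reply["content"])            # model gave a final answer
        validate(instance=artifact, schema=PATCH_SCHEMA)   # reject malformed output
        return artifact
    raise RuntimeError("no valid artifact produced within max_steps")
```

Keeping the schema check inside the loop means malformed output never leaves the function, which is what makes downstream automation safe to build on.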

When to use it

  • Building an AI code assistant or IDE plugin that generates, explains, or refactors code.
  • Automating code reviews with consistent, reproducible suggestions and checks.
  • Designing pipelines that require LLM-driven function calls or external tool orchestration.
  • Generating structured artifacts (APIs, JSON schemas, config files) that must pass validation (see the schema-first sketch after this list).
  • Creating agent-based automation where tools, tests, and validators are invoked programmatically.
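
A sketch of the schema-first case above: the model class and field names are illustrative and assume pydantic v2. The point is that the schema exists before any prompt is sent, and every response must parse against it before use.

```python
from pydantic import BaseModel, ValidationError  # assumes pydantic v2


class ServiceConfig(BaseModel):
    """Declared up front: the only shape a generated config is allowed to take."""
    name: str
    port: int
    replicas: int = 1


def parse_generated_config(raw_json: str) -> ServiceConfig:
    """Accept a model response only if it satisfies the declared schema."""
    try:
        return ServiceConfig.model_validate_json(raw_json)
    except ValidationError as err:
        # Surface the exact violation so the caller can re-prompt or reject.
        raise ValueError(f"generated config rejected: {err}") from err
```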

Best practices

  • Always design generation around explicit patterns and schema-first outputs to avoid ambiguity.
  • Use a diagnostic checklist to detect hallucinations, unsafe changes, or context drift early.
  • Validate every LLM-generated artifact with automated validators and unit tests before applying changes.
  • Limit tool access and require explicit intent and checks before invoking write operations (a minimal gate is sketched after this list).
  • Log and version generated outputs so refactors and suggestions are auditable and reversible.
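
A minimal sketch of that write gate: tools are split into read-only and write-capable sets, and write tools run only after explicit confirmation and passing checks. The tool names and flags are illustrative, not part of any particular framework.

```python
from typing import Callable

READ_TOOLS = {"run_linter", "run_tests"}      # safe to call at any time
WRITE_TOOLS = {"apply_patch", "write_file"}   # require explicit intent and checks


def invoke_tool(name: str, fn: Callable[..., str], *, user_confirmed: bool,
                validations_passed: bool, **kwargs) -> str:
    """Dispatch a tool call, blocking write operations that lack intent or checks."""
    if name not in READ_TOOLS | WRITE_TOOLS:
        raise PermissionError(f"{name} is not an allow-listed tool")
    if name in WRITE_TOOLS and not (user_confirmed and validations_passed):
        raise PermissionError(f"{name} blocked: needs explicit intent and passing checks")
    return fn(**kwargs)
```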

Example use cases

  • An IDE assistant that suggests refactorings and applies them via a vetted formatter and test-runner.
  • A CI step that uses an LLM to propose fixes for lint failures and creates PR drafts with validated patches.
  • A code-generation API that produces client SDKs from interface definitions and validates output against schemas.
  • An automated reviewer that flags security or correctness issues using function calls to static analyzers.
  • A developer bot that generates implementation stubs, runs tests, and returns structured pass/fail reports (sketched below).
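
A sketch of the test-and-report step from that last example, assuming pytest is the test runner; the report fields and default path are illustrative.

```python
import json
import subprocess


def run_tests_and_report(test_dir: str = "tests") -> dict:
    """Run the test suite and return a machine-readable pass/fail summary."""
    result = subprocess.run(
        ["pytest", test_dir, "-q", "--tb=short"],
        capture_output=True, text=True,
    )
    lines = result.stdout.strip().splitlines()
    return {
        "passed": result.returncode == 0,
        "exit_code": result.returncode,
        "summary": lines[-1] if lines else "",
    }


if __name__ == "__main__":
    print(json.dumps(run_tests_and_report(), indent=2))
```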

FAQ

How do I reduce hallucinations in generated code?

Produce schema-constrained outputs, call external validators and linters, and include a diagnostic step that checks for unsupported APIs or improbable changes before applying code.
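
One concrete form of that diagnostic step, sketched below: parse the generated code with Python's ast module and flag any imports outside an allow-list before anything is executed or applied. The allow-list here is illustrative.

```python
import ast

ALLOWED_MODULES = {"json", "pathlib", "typing", "dataclasses"}  # illustrative allow-list


def find_unsupported_imports(source: str) -> list[str]:
    """Return top-level modules the generated code imports but the allow-list lacks."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        flagged.extend(n for n in names if n not in ALLOWED_MODULES)
    return flagged
```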

Can generated changes be applied automatically?

Yes, but only when outputs pass automated validation and tests; otherwise produce suggested diffs for human review to avoid risky modifications.
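
A sketch of that apply-or-suggest decision; file handling is simplified and the function name is illustrative.

```python
import difflib
from pathlib import Path


def apply_or_suggest(path: str, new_source: str, checks_passed: bool) -> str:
    """Apply the change only when checks pass; otherwise return a diff for review."""
    original = Path(path).read_text(encoding="utf-8")
    if checks_passed:
        Path(path).write_text(new_source, encoding="utf-8")
        return f"applied change to {path}"
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        new_source.splitlines(keepends=True),
        fromfile=path, tofile=f"{path} (proposed)",
    )
    return "".join(diff)  # hand this to a human reviewer
```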