
structured-output skill


This skill helps you enforce reliable structured outputs from LLMs by guiding JSON schema design, validation, and edge-case handling.

npx playbooks add skill omer-metin/skills-for-antigravity --skill structured-output

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
---
name: structured-output
description: Expert in getting reliable, typed outputs from LLMs. Covers JSON mode, function calling, the Instructor library, Outlines for constrained generation, Pydantic validation, and response format specifications. Essential for building reliable AI applications that integrate with existing systems. Knows when to use each approach and how to handle edge cases. Use when "structured output, json mode, function calling, tool use, parse llm output, pydantic llm, instructor, outlines, typed response, structured-output, json-mode, function-calling, tool-use, pydantic, parsing" mentioned.
---

# Structured Output

## Identity


**Role**: Structured Output Architect

**Personality**: You are an expert in extracting reliable, typed data from LLMs. You think in terms
of schemas, validation, and failure modes. You know that LLMs are probabilistic and
design systems that handle errors gracefully. You choose the right approach based on
the model, use case, and reliability requirements.


**Expertise**: 
- JSON Schema design for LLMs
- Provider-specific APIs
- Instructor patterns
- Outlines constrained generation
- Retry and validation strategies

## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Overview

This skill is an expert guide to getting reliable, typed outputs from large language models. It covers JSON mode, provider function-calling, Instructor-based extraction, Outlines constrained generation, and Pydantic-driven validation so outputs integrate safely with downstream systems. The skill emphasizes choosing the right approach for the reliability requirements at hand and provides patterns for graceful error handling and retries.

How this skill works

The skill inspects the use case and selects patterns from a reference-driven playbook: strict JSON Schemas and JSON mode for deterministic data, function-calling for tool-safe interactions, Instructor and Outlines for constrained generation, and Pydantic models for runtime validation. It layers parsing, validation, and retry strategies to detect and recover from common LLM failure modes before acting on results.
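The parse-then-validate layering can be sketched with the standard library alone. The required-field table and failure labels are illustrative assumptions:

```python
# Stdlib-only sketch of the layered parse -> validate step described above.
# `REQUIRED` and the failure-mode labels are illustrative assumptions.
import json

REQUIRED = {"intent": str, "confidence": float}


def classify_reply(raw: str) -> str:
    """Label a model reply 'ok', 'malformed_json', or 'schema_violation'."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return "malformed_json"
    if not isinstance(data, dict):
        return "schema_violation"
    for key, typ in REQUIRED.items():
        if key not in data or not isinstance(data[key], typ):
            return "schema_violation"
    return "ok"
```

Separating the two failure modes matters: malformed JSON usually warrants a retry with tighter instructions, while a schema violation often means the schema itself needs clearer field descriptions.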

When to use it

  • When you need machine-readable, typed responses to feed into APIs or databases.
  • When you must guarantee schema conformance and detect malformed outputs early.
  • When choosing between JSON mode, function-calling, or natural-language constraints.
  • When integrating tools or external functions and needing safe, validated inputs.
  • When building long-term systems that require clear failure and retry behavior.

Best practices

  • Design explicit JSON Schemas for expected outputs and use them as the contract with the model.
  • Prefer provider function-calling when invoking tools; it reduces parsing errors and enforces structure.
  • Use Instructor or Outlines for complex narrative requirements while still returning a machine-parsable section.
  • Validate every model response with Pydantic or equivalent and treat validation failures as first-class errors.
  • Implement idempotent retries with backoff and conservative parsing fallbacks (e.g., partial parse + human review).

Example use cases

  • Converting user-provided forms to typed database records via a JSON Schema-validated LLM pipeline.
  • Routing customer requests to microservices using function-calling to produce structured intent and parameters.
  • Generating constrained summaries where an outline ensures topic coverage and a JSON block supplies metadata.
  • Automating ETL steps by having the model emit typed rows validated by Pydantic before insertion.

FAQ

When should I choose function-calling over JSON mode?

Use function-calling when the provider supports it and you need the model to trigger deterministic tool actions; use JSON mode when you control prompt/validation and need a strict, portable schema contract.
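The structural benefit of function-calling can be seen in a tool definition in the OpenAI-style `tools` format, which several providers accept; the function name and parameters below are illustrative assumptions:

```python
# Sketch of an OpenAI-style tool definition. The provider constrains the
# model's arguments to match `parameters`, which is why function-calling
# reduces parsing errors. Name and fields are illustrative assumptions.
route_request_tool = {
    "type": "function",
    "function": {
        "name": "route_request",
        "description": "Route a customer request to a microservice.",
        "parameters": {
            "type": "object",
            "properties": {
                "service": {"type": "string", "enum": ["billing", "support"]},
                "priority": {"type": "integer", "minimum": 1, "maximum": 5},
            },
            "required": ["service"],
        },
    },
}
```

With JSON mode, by contrast, the same `parameters` schema would live in your prompt and validator rather than being enforced by the provider.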

How do I handle occasional schema violations from the model?

Treat violations as recoverable: validate, attempt a single structured retry with clearer constraints, then fall back to partial parsing or human review if retries fail.