
content-creation skill

/skills/content-creation

This skill helps you master AI tools to generate unlimited content at scale, including images, video, voiceovers, and compelling copy.

npx playbooks add skill omer-metin/skills-for-antigravity --skill content-creation

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
---
name: content-creation-ai-tools
description: Master the AI tools that generate unlimited content at scale. From stunning images to professional videos, voiceovers to compelling copy - create content that used to require entire teams. Use when "need images, create video, generate content, voiceover, marketing visuals, social media content, ad creative, content, images, video, audio, writing, marketing, creative" mentioned. 
---

# Content Creation AI Tools

## Identity

You are an AI content-creation specialist: you turn briefs into images, video, voiceovers, and compelling copy at scale, grounding every output in this skill's reference files.

## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Overview

This skill helps you master AI tools that generate unlimited content at scale, from images and video to voiceovers and copy. It packages best-practice patterns, failure checks, and validation rules so teams can produce professional creative assets faster and more safely. Use it to streamline content pipelines and reduce manual handoffs.

How this skill works

The skill guides creation by following the canonical patterns in references/patterns.md so outputs match proven formats and workflows. It diagnoses risk and common failure modes using references/sharp_edges.md, surfacing likely problems early. It validates inputs and final assets against the strict rules in references/validations.md so outputs meet constraints and compliance needs.
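
The snippet below is a minimal, self-contained sketch of that create → diagnose → validate flow. The function names (diagnose, validate, run_pipeline) and the example rules are illustrative assumptions; the real checks live in references/sharp_edges.md and references/validations.md and are applied by the agent rather than by a Python API shipped with the skill.

```python
from dataclasses import dataclass

# Sketch of the create -> diagnose -> validate flow.
# Names and rule strings here are illustrative placeholders;
# the actual checks come from the skill's reference files.

@dataclass
class Finding:
    stage: str      # "diagnosis" or "validation"
    message: str

def diagnose(copy: str) -> list[Finding]:
    """Stand-in for sharp_edges.md checks: flag likely failure modes."""
    findings = []
    if "guaranteed" in copy.lower():
        findings.append(Finding("diagnosis", "Unverifiable claim ('guaranteed')"))
    return findings

def validate(copy: str, max_chars: int = 125) -> list[Finding]:
    """Stand-in for validations.md rules: enforce hard constraints."""
    findings = []
    if len(copy) > max_chars:
        findings.append(Finding("validation", f"Copy exceeds {max_chars} characters"))
    return findings

def run_pipeline(copy: str) -> list[Finding]:
    """Generate (elsewhere), then diagnose and validate before publishing."""
    return diagnose(copy) + validate(copy)

if __name__ == "__main__":
    for f in run_pipeline("Guaranteed results from our new AI-powered gadget!"):
        print(f.stage, "-", f.message)
```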

When to use it

  • You need images, video, audio, or copy produced quickly and consistently at scale.
  • You want to automate social media, ad creative, or marketing asset production.
  • You must check generated content for common failure modes or ethical risks.
  • You need to validate assets against formatting, length, or brand constraints.
  • You are building pipelines that combine multiple AI generation models.

Best practices

  • Follow the patterns in references/patterns.md as the authoritative templates for prompts, composition, and sequencing.
  • Run the diagnosis checks from references/sharp_edges.md early to catch hallucinations, bias, or quality drops.
  • Validate every asset with references/validations.md rules before publishing or forwarding to human reviewers.
  • Iterate prompts with small, measurable changes and keep changelogs for reproducibility.
  • Chain model calls with clear input/output contracts to avoid cascading errors.
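
To illustrate the last point, here is a hedged sketch of two chained steps (script → voiceover) with explicit dataclass contracts. The Script and Voiceover types, their fields, and the placeholder generator functions are assumptions for illustration, not part of the skill; the point is that each step declares exactly what it consumes and produces.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Script:
    text: str
    language: str

@dataclass(frozen=True)
class Voiceover:
    audio_path: str
    duration_s: float
    source: Script  # keep the upstream input for traceability

def write_script(brief: str) -> Script:
    # Placeholder for a text-generation model call.
    return Script(text=f"Intro for: {brief}", language="en")

def synthesize_voice(script: Script) -> Voiceover:
    # Placeholder for a TTS call; a real step would render an audio file.
    if not script.text.strip():
        raise ValueError("Contract violated: script.text must be non-empty")
    return Voiceover(audio_path="out/voiceover.wav",
                     duration_s=len(script.text.split()) / 2.5,
                     source=script)

vo = synthesize_voice(write_script("30-second product teaser"))
print(vo.duration_s)
```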

Example use cases

  • Generate 100 localized banner images with consistent brand layout and automated quality checks.
  • Produce short product videos: script → storyboard → synthetic voiceover → render, validated against duration and aspect rules (a sketch of these checks follows this list).
  • Create weekly social copy variations and automatically filter for policy or profanity issues.
  • Build ad creative A/B sets where each variant follows the same validation and diagnosis pipeline.
  • Automate podcast episode snippets: transcript → highlight extraction → voice synthesis → publishing-ready audio.
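
For the product-video example above, the sketch below shows what duration and aspect-ratio validation might look like. The 30-second limit and the allowed ratios are placeholder values; in practice these constraints would come from references/validations.md.

```python
# Placeholder duration/aspect checks; real limits belong in validations.md.

def check_video(duration_s: float, width: int, height: int,
                max_duration_s: float = 30.0,
                allowed_ratios: tuple[tuple[int, int], ...] = ((9, 16), (1, 1))) -> list[str]:
    """Return a list of human-readable violations (empty list means pass)."""
    problems = []
    if duration_s > max_duration_s:
        problems.append(f"Duration {duration_s:.1f}s exceeds {max_duration_s:.0f}s limit")
    if not any(width * b == height * a for a, b in allowed_ratios):
        problems.append(f"Aspect {width}x{height} not in allowed set {allowed_ratios}")
    return problems

print(check_video(34.2, 1920, 1080))   # flags both duration and aspect
print(check_video(28.0, 1080, 1920))   # 9:16 vertical within limit, passes
```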

FAQ

What if a generated asset violates validation rules?

The skill flags the issue and returns actionable fixes based on validations.md; re-run generation with corrected inputs or apply constrained templates from patterns.md.

How do I handle hallucinations or biased outputs?

Use the diagnosis checks in sharp_edges.md to identify likely causes, then constrain prompts, add grounding data, or apply human review gates.
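
As one possible shape of a human review gate, the sketch below holds risky copy for a reviewer instead of publishing it automatically. The RISK_TERMS heuristics and the routing callables are illustrative assumptions; the actual failure modes to screen for are described in sharp_edges.md.

```python
# Sketch of a human review gate: assets that trip diagnosis checks are
# held for a reviewer rather than published automatically.

RISK_TERMS = ("cure", "guaranteed", "100% safe")  # placeholder heuristics

def needs_review(copy: str) -> bool:
    return any(term in copy.lower() for term in RISK_TERMS)

def route(copy: str, publish, hold_for_review):
    (hold_for_review if needs_review(copy) else publish)(copy)

route("Our serum is 100% safe for all skin types",
      publish=lambda c: print("published:", c),
      hold_for_review=lambda c: print("held for human review:", c))
```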