```shell
npx playbooks add skill omer-metin/skills-for-antigravity --skill text-to-video
```
---
name: text-to-video
description: Expert patterns for AI video generation including text-to-video, image-to-video, video editing, and API integration with Runway, Kling, Luma, Wan, and Replicate. Use when "text to video", "video generation", "image to video", "runway api", "kling video", "luma dream machine", "wan video", "animate image", or "ai video" is mentioned.
---
# Text To Video
## Identity
You are a specialist in AI-driven video generation, covering text-to-video, image-to-video, video editing, and API integration with providers such as Runway, Kling, Luma, Wan, and Replicate.
## Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:
* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.
**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
This skill provides expert patterns and ready-to-use guidance for AI-driven video generation workflows, spanning text-to-video, image-to-video, video editing, and API integration with providers like Runway, Kling, Luma, Wan, and Replicate. It packages proven recipe-style approaches, diagnostic checks, and validation rules so teams can produce reliable, high-quality outputs faster.
The skill codifies creation patterns, known failure modes, and strict validation rules into concise instructions you can apply to prompt design, model selection, and API orchestration. It recommends concrete parameter choices, pre- and post-processing steps, and safety checks, and it surfaces risks and mitigations for each stage of the pipeline.
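Most provider APIs in this space are asynchronous: you submit a render job, then poll for completion. As a minimal sketch of that orchestration pattern, the snippet below assumes a hypothetical `client` object with a `get_status(job_id)` method returning a dict with `status` and (on success) `video_url` keys; no real provider's endpoint names are implied.

```python
import time

def wait_for_video(client, job_id, poll_interval=1.0, timeout=600.0):
    """Poll a hypothetical provider client until a render job finishes.

    `client.get_status(job_id)` is assumed to return a dict with a
    "status" key ("queued" | "running" | "succeeded" | "failed") and,
    on success, a "video_url" key. Adapt field names to your provider.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = client.get_status(job_id)
        status = job["status"]
        if status == "succeeded":
            return job["video_url"]
        if status == "failed":
            raise RuntimeError(f"render failed: {job.get('error', 'unknown')}")
        time.sleep(poll_interval)  # back off between status checks
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

A bounded timeout and an explicit failure branch keep a stuck or rejected render from hanging the pipeline silently.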
**Which provider should I pick for early prototyping?**

Choose the provider with the fastest turnaround and lowest cost per render at your target resolution, and use deterministic seeds so you can compare output quality across vendors on identical prompts.
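The cost-and-turnaround ranking above can be made mechanical. The sketch below sorts candidate providers by cost per render, breaking ties on turnaround; the provider names and numbers are placeholders, not real vendor pricing.

```python
def rank_providers(providers, max_cost=None):
    """Rank providers for prototyping: cheapest first, then fastest.

    `providers` maps a name to {"cost_per_render": USD, "turnaround_s": seconds}.
    An optional budget cap filters out providers above `max_cost`.
    """
    candidates = {
        name: p for name, p in providers.items()
        if max_cost is None or p["cost_per_render"] <= max_cost
    }
    return sorted(
        candidates,
        key=lambda n: (candidates[n]["cost_per_render"],
                       candidates[n]["turnaround_s"]),
    )

# Placeholder figures for illustration only.
providers = {
    "vendor_a": {"cost_per_render": 0.25, "turnaround_s": 90},
    "vendor_b": {"cost_per_render": 0.10, "turnaround_s": 240},
    "vendor_c": {"cost_per_render": 0.10, "turnaround_s": 60},
}
```

Running the same seeded prompt through the top two or three candidates then gives a like-for-like quality comparison before committing to one vendor.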
**How do I reduce flicker and temporal instability?**

Use motion-consistent seeds, raise temporal-coherence parameters where the provider exposes them, and add a temporal smoothing refinement pass between drafts and the final render.
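One simple form the smoothing refinement pass can take is an exponential moving average across frames, which damps frame-to-frame flicker at the cost of slightly trailing fast motion. This is a post-processing sketch applied to decoded frames, not a feature of any particular provider.

```python
import numpy as np

def temporal_smooth(frames, alpha=0.6):
    """Exponential moving average over the time axis to damp flicker.

    frames: array of shape (T, H, W, C), values in [0, 1].
    alpha is the weight of the current frame; lower alpha means
    stronger smoothing but more motion lag.
    """
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        # Blend the current frame with the running smoothed history.
        out[t] = alpha * frames[t] + (1 - alpha) * out[t - 1]
    return out
```

For production use, optical-flow-aligned blending avoids the ghosting a plain EMA introduces on large motion, but the EMA is a cheap first pass for drafts.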