venice-ai-media skill

This skill generates, edits, and upscales images and creates videos from images or other videos via Venice AI, with customizable models, durations, and styles.

npx playbooks add skill openclaw/skills --skill venice-ai-media

SKILL.md
---
name: venice-ai-media
description: Generate, edit, and upscale images; create videos from images or other videos via Venice AI. Supports text-to-image, image-to-video (Sora, WAN), video-to-video (Runway Gen4), upscaling, and AI editing.
homepage: https://venice.ai
metadata:
  {
    "clawdbot":
      {
        "emoji": "🎨",
        "requires": { "bins": ["python3"], "env": ["VENICE_API_KEY"] },
        "primaryEnv": "VENICE_API_KEY",
        "notes": "Requires Python 3.10+",
        "install":
          [
            {
              "id": "python-brew",
              "kind": "brew",
              "formula": "python",
              "bins": ["python3"],
              "label": "Install Python (brew)",
            },
          ],
      },
  }
---

# Venice AI Media

Generate images and videos using Venice AI APIs. Venice is an uncensored AI platform with competitive pricing.

## Prerequisites

- **Python 3.10+** (`brew install python` or system Python)
- **Venice API key** (free tier available)
- **requests** library (auto-installed by scripts if missing)

## Setup

### 1. Get Your API Key

1. Create account at [venice.ai](https://venice.ai)
2. Go to [venice.ai/settings/api](https://venice.ai/settings/api)
3. Click "Create API Key"
4. Copy the key (starts with `vn_...`)

### 2. Configure the Key

**Option A: Environment variable**

```bash
export VENICE_API_KEY="vn_your_key_here"
```

**Option B: Clawdbot config** (recommended - persists across sessions)

Add to `~/.clawdbot/clawdbot.json`:

```json5
{
  skills: {
    entries: {
      "venice-ai-media": {
        env: {
          VENICE_API_KEY: "vn_your_key_here",
        },
      },
    },
  },
}
```

### 3. Verify Setup

```bash
python3 {baseDir}/scripts/venice-image.py --list-models
```

If you see a list of models, you're ready!

## Pricing Overview

| Feature          | Cost                              |
| ---------------- | --------------------------------- |
| Image generation | ~$0.01-0.03 per image             |
| Image upscale    | ~$0.02-0.04                       |
| Image edit       | $0.04                             |
| Video (WAN)      | ~$0.10-0.50 depending on duration |
| Video (Sora)     | ~$0.50-2.00 depending on duration |
| Video (Runway)   | ~$0.20-1.00                       |

Use `--quote` with video commands to check pricing before generation.
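For a rough sense of scale before quoting, you can ballpark from the table. The per-second rates below are midpoints of the ranges above assuming a 10-second clip; they are not authoritative Venice pricing, so treat `--quote` as the source of truth.

```python
# Illustrative only: rates are rough midpoints of the pricing table,
# not authoritative Venice pricing. Always confirm with --quote.
RATE_PER_SECOND = {       # approximate USD per second of video
    "wan": 0.30 / 10,     # ~$0.10-0.50 for a 10s clip
    "sora": 1.25 / 10,    # ~$0.50-2.00
    "runway": 0.60 / 10,  # ~$0.20-1.00
}

def rough_video_cost(model_family: str, seconds: int) -> float:
    """Back-of-envelope cost estimate; use --quote for the real number."""
    return round(RATE_PER_SECOND[model_family] * seconds, 2)
```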

## Quick Start

```bash
# Generate an image
python3 {baseDir}/scripts/venice-image.py --prompt "a serene canal in Venice at sunset"

# Upscale an image
python3 {baseDir}/scripts/venice-upscale.py photo.jpg --scale 2

# Edit an image with AI
python3 {baseDir}/scripts/venice-edit.py photo.jpg --prompt "add sunglasses"

# Create a video from an image
python3 {baseDir}/scripts/venice-video.py --image photo.jpg --prompt "gentle camera pan" --duration 5s
```

---

## Image Generation

```bash
python3 {baseDir}/scripts/venice-image.py --prompt "a serene canal in Venice at sunset"
python3 {baseDir}/scripts/venice-image.py --prompt "cyberpunk city" --count 4
python3 {baseDir}/scripts/venice-image.py --prompt "portrait" --width 768 --height 1024
python3 {baseDir}/scripts/venice-image.py --prompt "abstract art" --out-dir /tmp/venice
python3 {baseDir}/scripts/venice-image.py --list-models
python3 {baseDir}/scripts/venice-image.py --list-styles
python3 {baseDir}/scripts/venice-image.py --prompt "fantasy" --model flux-2-pro --no-validate
python3 {baseDir}/scripts/venice-image.py --prompt "photo" --style-preset "Cinematic" --embed-exif
```

**Key flags:**

- `--prompt`, `--model` (default: `flux-2-max`), `--count` (uses the efficient batch API for the same prompt)
- `--width`, `--height`, `--format` (webp/png/jpeg), `--resolution` (1K/2K/4K), `--aspect-ratio`
- `--negative-prompt`, `--style-preset` (use `--list-styles` to see options), `--cfg-scale` (prompt adherence, 0-20, default 7.5), `--seed` (for reproducible results)
- `--safe-mode` (disabled by default for uncensored output), `--hide-watermark` (only use if explicitly requested; the watermark supports Venice), `--embed-exif` (embed the prompt in image metadata)
- `--lora-strength` (0-100, for applicable models), `--steps` (inference steps, model-dependent), `--enable-web-search`, `--no-validate` (skip the model check for new/beta models)

## Image Upscale

```bash
python3 {baseDir}/scripts/venice-upscale.py photo.jpg --scale 2
python3 {baseDir}/scripts/venice-upscale.py photo.jpg --scale 4 --enhance
python3 {baseDir}/scripts/venice-upscale.py photo.jpg --enhance --enhance-prompt "sharpen details"
python3 {baseDir}/scripts/venice-upscale.py --url "https://example.com/image.jpg" --scale 2
```

**Key flags:** `--scale` (1-4, default: 2), `--enhance` (AI enhancement), `--enhance-prompt`, `--enhance-creativity` (0.0-1.0), `--replication` (0.0-1.0, preserves lines/noise, default: 0.35), `--url` (use URL instead of local file), `--output`, `--out-dir`

## Image Edit

```bash
python3 {baseDir}/scripts/venice-edit.py photo.jpg --prompt "add sunglasses"
python3 {baseDir}/scripts/venice-edit.py photo.jpg --prompt "change the sky to sunset"
python3 {baseDir}/scripts/venice-edit.py photo.jpg --prompt "remove the person in background"
python3 {baseDir}/scripts/venice-edit.py --url "https://example.com/image.jpg" --prompt "colorize"
```

**Key flags:** `--prompt` (required - AI interprets what to modify), `--url` (use URL instead of local file), `--output`, `--out-dir`

**Note:** The edit endpoint uses the Qwen-Image model which has some content restrictions (unlike other Venice endpoints).

## Video Generation

```bash
# Get price quote first (no generation)
python3 {baseDir}/scripts/venice-video.py --quote --model wan-2.6-image-to-video --duration 10s --resolution 720p

# Image-to-video (WAN 2.6 - default)
python3 {baseDir}/scripts/venice-video.py --image photo.jpg --prompt "camera pans slowly" --duration 10s

# Image-to-video (Sora)
python3 {baseDir}/scripts/venice-video.py --image photo.jpg --prompt "cinematic" \
  --model sora-2-image-to-video --duration 8s --aspect-ratio 16:9 --skip-audio-param

# Video-to-video (Runway Gen4)
python3 {baseDir}/scripts/venice-video.py --video input.mp4 --prompt "anime style" \
  --model runway-gen4-turbo-v2v

# List models (shows available durations per model)
python3 {baseDir}/scripts/venice-video.py --list-models

# Clean up a video downloaded with --no-delete
python3 {baseDir}/scripts/venice-video.py --complete <queue_id> --model <model>
```

**Key flags:** `--image` or `--video` (required for generation), `--prompt` (required for generation), `--model` (default: `wan-2.6-image-to-video`), `--duration` (model-dependent, see `--list-models`), `--resolution` (480p/720p/1080p), `--aspect-ratio`, `--audio`/`--no-audio`, `--skip-audio-param`, `--quote` (price estimate), `--timeout`, `--poll-interval`, `--no-delete` (keep server media), `--complete` (clean up a previously downloaded video), `--no-validate` (skip model check)

**Progress:** During generation, the script shows estimated progress based on Venice's average execution time.
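That progress behavior boils down to a poll loop: fetch the job status, print an estimate against the model's average runtime, and give up after the timeout. A simplified sketch follows; the status strings and function names are assumptions for illustration, not the script's actual internals.

```python
import time
from typing import Callable

def poll_until_done(
    fetch_status: Callable[[], str],
    timeout: float = 300.0,
    poll_interval: float = 5.0,
    avg_seconds: float = 120.0,
    sleep: Callable[[float], None] = time.sleep,
) -> str:
    """Poll a job until it reports 'done' or 'failed' (assumed states),
    printing estimated progress based on an average execution time."""
    start = time.monotonic()
    while True:
        status = fetch_status()
        if status in ("done", "failed"):
            return status
        elapsed = time.monotonic() - start
        if elapsed > timeout:
            raise TimeoutError(f"gave up after {elapsed:.0f}s")
        # Cap the estimate at 99% since the job has not actually finished.
        pct = min(99, int(100 * elapsed / avg_seconds))
        print(f"~{pct}% (status: {status})")
        sleep(poll_interval)
```

In the real script, `--timeout` and `--poll-interval` correspond to the timeout and sleep interval here.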

## Model Notes

Use `--list-models` to see current availability and status. Models change frequently.

**Image:** Default is `flux-2-max`. Common options include flux, gpt-image, and nano-banana variants.

**Video:**

- **WAN** models: Image-to-video, configurable audio, various durations (5s-21s)
- **Sora** models: Requires `--aspect-ratio`, use `--skip-audio-param`
- **Runway** models: Video-to-video transformation

**Tips:**

- Use `--no-validate` for new or beta models not yet in the model list
- Use `--quote` for video to check pricing before generation
- Safe mode is disabled by default (Venice is an uncensored API)

## Output

Scripts print a `MEDIA: /path/to/file` line for Clawdbot auto-attach.

**Tip:** Use `--out-dir /tmp/venice-$(date +%s)` when generating media to send via iMessage (ensures accessibility across user accounts).
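If you drive these scripts from your own automation, the `MEDIA:` lines are straightforward to scrape from stdout. A small illustrative parser (the helper name and regex are assumptions, not part of the skill):

```python
import re

# Matches lines like "MEDIA: /tmp/venice/out.webp" anywhere in script output.
MEDIA_RE = re.compile(r"^MEDIA:\s*(?P<path>.+?)\s*$", re.MULTILINE)

def extract_media_paths(output: str) -> list[str]:
    """Collect every path announced on a MEDIA: line."""
    return [m.group("path") for m in MEDIA_RE.finditer(output)]
```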

## Troubleshooting

**"VENICE_API_KEY not set"**

- Check your config in `~/.clawdbot/clawdbot.json`
- Or export the env var: `export VENICE_API_KEY="vn_..."`

**"Invalid API key"**

- Verify your key at [venice.ai/settings/api](https://venice.ai/settings/api)
- Keys start with `vn_`

**"Model not found"**

- Run `--list-models` to see available models
- Use `--no-validate` for new/beta models

**Video stuck/timeout**

- Videos can take 1-5 minutes depending on model and duration
- Use `--timeout 600` for longer videos
- Check Venice status at [venice.ai](https://venice.ai)

**"requests" module not found**

- Install it: `pip3 install requests`

## Overview

This skill integrates with Venice AI to generate, edit, and upscale images and to create videos from images or other videos. It supports text-to-image, image-to-video (WAN, Sora), video-to-video (Runway Gen4), upscaling, and AI image editing through a simple CLI. Pricing and model options are exposed so you can preview costs and choose duration, resolution, and style presets.

## How this skill works

The scripts call Venice AI endpoints using your `VENICE_API_KEY` to submit generation, edit, and upscale jobs. Commands let you list models and styles, request price quotes for videos, and download completed media. Outputs include local file paths and a `MEDIA:` line for easy integration with messaging or automation tools.

## When to use it

- Generate concept art, illustrations, or photoreal images from text prompts.
- Create short animated videos from a single image with camera moves or style transforms.
- Transform existing video footage into different visual styles using video-to-video models.
- Enhance or upscale low-resolution images for print or presentation.
- Quickly iterate on image edits like object removal, color changes, or accessories.

## Best practices

- Set `VENICE_API_KEY` via environment variable or persistent Clawdbot config to avoid repeated prompts.
- Use `--quote` before creating videos to estimate cost for a given duration and resolution.
- Start with the default models (`flux-2-max` for images, `wan-2.6` for image-to-video) and use `--list-models` to explore options.
- Include `--seed` for reproducible image outputs and `--cfg-scale` to control prompt adherence.
- Use `--out-dir` with a timestamp when generating media you plan to share across accounts or services.

## Example use cases

- Produce a set of marketing visuals: generate multiple images with `--count` and a consistent prompt.
- Create a 10s promo clip: convert a hero image into a cinematic pan using `--duration` and `--model sora-2-image-to-video`.
- Restore and upscale archival photos for print using `--scale 4` and `--enhance`.
- Turn a travel vlog into an anime-style video using video-to-video (Runway Gen4) with a style prompt.
- Rapidly prototype product mockups by editing photos (adding or removing elements) with AI edit commands.

## FAQ

**How do I set my API key?**

Export `VENICE_API_KEY` or add it to your Clawdbot config under the skill entry for persistent use.

**Can I estimate video cost before generating?**

Yes: run the video command with `--quote` and the chosen model, duration, and resolution to get a cost estimate.

**What if a model is missing from the list?**

Use `--no-validate` for new or beta models, and re-run `--list-models` as availability changes frequently.