
videoagent-audio-studio skill


This skill provides one-command access to text-to-speech, music, sound effects, and voice cloning for seamless audio creation.

npx playbooks add skill pexoai/pexo-skills --skill videoagent-audio-studio

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: videoagent-audio-studio
version: 3.0.0
author: "wells"
emoji: "🎙️"
tags:
  - video
  - audio
  - tts
  - music
  - sfx
  - voice-clone
  - elevenlabs
  - fal
description: >
  Tired of juggling multiple audio APIs? This skill gives you one-command access to TTS, music generation, sound effects, and voice cloning. Use when you want to generate any audio without managing multiple API keys.
homepage: https://github.com/pexoai/audiomind-skill
metadata:
  openclaw:
    emoji: "🎙️"
    primaryEnv: ELEVENLABS_API_KEY
    requires:
      env:
        - ELEVENLABS_API_KEY
    install:
      - id: elevenlabs-mcp
        kind: npm
        package: "@elevenlabs/mcp"
        label: "Install ElevenLabs MCP server"
---

# 🎙️ VideoAgent Audio Studio

**Use when:** User asks to generate speech, narrate text, create a voice-over, compose music, or produce a sound effect.

VideoAgent Audio Studio is a smart audio dispatcher. It analyzes your request and routes it to the best available model — ElevenLabs for speech and music, fal.ai for fast SFX — and returns a ready-to-use audio URL.

---

## Quick Reference

| Request Type | Best Model | Latency |
|---|---|---|
| Narrate text / Voice-over | `elevenlabs-tts-v3` | ~3s |
| Low-latency TTS (real-time) | `elevenlabs-tts-turbo` | <1s |
| Background music | `cassetteai-music` | ~15s |
| Sound effect | `elevenlabs-sfx` | ~5s |
| Clone a voice from audio | `elevenlabs-voice-clone` | ~10s |
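
The routing table above can be sketched as a small keyword dispatcher. This is an illustrative sketch only: the `route()` helper and its keyword lists are our own, while the model IDs come from the table.

```python
# Minimal sketch of the routing implied by the Quick Reference table.
# Keyword lists and route() are illustrative, not the skill's actual code.

ROUTES = {
    "tts": "elevenlabs-tts-v3",
    "tts_realtime": "elevenlabs-tts-turbo",
    "music": "cassetteai-music",
    "sfx": "elevenlabs-sfx",
    "voice_clone": "elevenlabs-voice-clone",
}

def route(request: str, realtime: bool = False) -> str:
    """Pick a model ID from keywords in the user's request."""
    text = request.lower()
    if any(k in text for k in ("clone", "my voice")):
        return ROUTES["voice_clone"]
    if any(k in text for k in ("music", "soundtrack", "compose")):
        return ROUTES["music"]
    if any(k in text for k in ("sound effect", "sfx", "ambience")):
        return ROUTES["sfx"]
    return ROUTES["tts_realtime"] if realtime else ROUTES["tts"]
```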

---

## How to Use

### 1. Start the AudioMind server (once per session)

```bash
bash {baseDir}/tools/start_server.sh
```

This starts the ElevenLabs MCP server on port 8124. The skill uses it for all audio generation.

### 2. Route the request

Analyze the user's request and call the appropriate tool via the MCP server:

**Text-to-Speech (TTS)**

When the user asks to "narrate", "read aloud", "say", or "create a voice-over":

```
Use MCP tool: text_to_speech
  text: "<the text to narrate>"
  voice_id: "JBFqnCBsd6RMkjVDRZzb"   # Default: "George" (professional, neutral)
  model_id: "eleven_multilingual_v2"   # Use "eleven_turbo_v2_5" for low latency
```
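
For scripted use, the `model_id` choice noted above can be captured in a tiny helper. A sketch only: the function name is ours; the model IDs are the ones listed in this doc.

```python
# Illustrative helper: choose a TTS model_id by latency requirement.
# Model IDs are taken from this skill's Model Reference table.

def pick_tts_model(low_latency: bool = False) -> str:
    """Return the turbo model for real-time use, else the quality model."""
    return "eleven_turbo_v2_5" if low_latency else "eleven_multilingual_v2"
```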

**Music Generation**

When the user asks to "compose", "create background music", or "make a soundtrack":

```
Use MCP tool: text_to_sound_effects  (via cassetteai-music on fal.ai)
  prompt: "<music description, e.g. 'upbeat lo-fi hip hop, 90 seconds'>"
  duration_seconds: <duration>
```

**Sound Effect (SFX)**

When the user asks for a specific sound (e.g., "a door creaking", "rain on a window"):

```
Use MCP tool: text_to_sound_effects
  text: "<sound description>"
  duration_seconds: <1-22>
```
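
Since `duration_seconds` must fall in the 1-22 second range stated above, it is worth clamping user-supplied values before calling the tool. A minimal sketch; the helper name is ours:

```python
# Illustrative helper: ElevenLabs SFX accepts 1-22 s durations (per the
# range above), so clamp the requested value before calling the tool.

def clamp_sfx_duration(requested: float) -> float:
    """Clamp a requested SFX duration to the supported 1-22 s range."""
    return max(1.0, min(22.0, requested))
```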

**Voice Cloning**

When the user provides an audio sample and wants to clone the voice:

```
Use MCP tool: voice_add
  name: "<voice name>"
  files: ["<audio_file_url>"]
```

---

## Example Conversations

**User:** "Voice this text for me: Welcome to our product launch"

```
→ Route to: text_to_speech
  text: "Welcome to our product launch"
  voice_id: "JBFqnCBsd6RMkjVDRZzb"
  model_id: "eleven_multilingual_v2"
```

> 🎙️ Voiceover done! [Listen here](audio_url)

---

**User:** "Generate 60 seconds of relaxing background music for a podcast"

```
→ Route to: cassetteai-music (fal.ai)
  prompt: "relaxing lo-fi background music for a podcast, gentle piano and soft beats, 60 seconds"
  duration_seconds: 60
```

> 🎵 Background music ready! [Listen here](audio_url)

---

**User:** "Generate a sci-fi style door opening sound effect"

```
→ Route to: text_to_sound_effects
  text: "a futuristic sci-fi door sliding open with a hydraulic hiss"
  duration_seconds: 3
```

---

## Setup

### Required

Set `ELEVENLABS_API_KEY` in `~/.openclaw/openclaw.json`:

```json
{
  "skills": {
    "entries": {
      "videoagent-audio-studio": {
        "enabled": true,
        "env": {
          "ELEVENLABS_API_KEY": "your_elevenlabs_key_here"
        }
      }
    }
  }
}
```

Get your key at [elevenlabs.io/app/settings/api-keys](https://elevenlabs.io/app/settings/api-keys).

### Optional (for fal.ai music & SFX models)

```json
"FAL_KEY": "your_fal_key_here"
```

Get your key at [fal.ai/dashboard/keys](https://fal.ai/dashboard/keys).

---

## Self-Hosting the Proxy

The `cli.js` script connects to a hosted proxy by default. If you want full control, or need to serve users in regions where `vercel.app` is blocked, you can deploy your own instance from the `proxy/` directory.

### Quick Deploy (Vercel)

```bash
cd proxy
npm install
vercel --prod
```

### Environment Variables

Set these in your Vercel project (Dashboard → Settings → Environment Variables):

| Variable | Required For | Where to Get |
|---|---|---|
| `ELEVENLABS_API_KEY` | TTS, SFX, Voice Clone | [elevenlabs.io/app/settings/api-keys](https://elevenlabs.io/app/settings/api-keys) |
| `FAL_KEY` | Music generation | [fal.ai/dashboard/keys](https://fal.ai/dashboard/keys) |
| `VALID_PRO_KEYS` | (Optional) Restrict access | Comma-separated list of allowed client keys |
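
One way a proxy could enforce the comma-separated `VALID_PRO_KEYS` allow-list is sketched below. This is a hypothetical illustration, not the actual proxy code; the open-access fallback when the variable is unset is an assumption.

```python
# Hypothetical sketch of VALID_PRO_KEYS enforcement; not the real proxy.
import os

def is_authorized(client_key: str, env: dict = os.environ) -> bool:
    """Check a client key against the comma-separated allow-list."""
    raw = env.get("VALID_PRO_KEYS", "")
    if not raw.strip():
        return True  # no allow-list configured -> open access (assumption)
    allowed = {k.strip() for k in raw.split(",") if k.strip()}
    return client_key in allowed
```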

### Point cli.js to Your Proxy

```bash
export AUDIOMIND_PROXY_URL="https://your-domain.com/api/audio"
```

Or set it in `~/.openclaw/openclaw.json`:

```json
{
  "skills": {
    "entries": {
      "videoagent-audio-studio": {
        "env": {
          "AUDIOMIND_PROXY_URL": "https://your-domain.com/api/audio"
        }
      }
    }
  }
}
```

### Custom Domain (Recommended)

If your users are in mainland China, bind a custom domain in Vercel Dashboard → Settings → Domains to avoid DNS issues with `vercel.app`.

---

## Model Reference

| Model ID | Type | Provider | Notes |
|---|---|---|---|
| `eleven_multilingual_v2` | TTS | ElevenLabs | Best quality, supports 29 languages |
| `eleven_turbo_v2_5` | TTS | ElevenLabs | Ultra-low latency, ideal for real-time |
| `eleven_monolingual_v1` | TTS | ElevenLabs | English only, fastest |
| `cassetteai-music` | Music | fal.ai | Reliable, fast music generation |
| `elevenlabs-sfx` | SFX | ElevenLabs | High-quality sound effects (up to 22s) |
| `elevenlabs-voice-clone` | Clone | ElevenLabs | Clone any voice from a short audio sample |

---

## Changelog

### v3.0.0
- **Simplified routing table**: Removed unstable/offline models from the main reference. The skill now only surfaces models that reliably work.
- **Clearer use-case triggers**: Added "Use when" section so the agent activates this skill at the right moment.
- **Unified setup**: Single `ELEVENLABS_API_KEY` is all you need to get started. `FAL_KEY` is now optional.
- **Removed polling complexity**: Music generation now uses `cassetteai-music` by default, which completes synchronously.

### v2.1.0
- Added async workflow for long-running music generation tasks.
- Added `cassetteai-music` as a stable alternative for music generation.

### v2.0.0
- Migrated to ElevenLabs MCP server architecture.
- Added voice cloning support.

### v1.0.0
- Initial release with TTS, music, and SFX routing.

Overview

This skill provides one-command access to TTS, music generation, sound effects, and voice cloning so you can produce any audio without juggling multiple API keys. It routes user requests to the best available model (ElevenLabs, fal.ai) and returns ready-to-use audio URLs. Use it to generate narrations, background music, SFX, or clone a voice from a sample quickly and reliably.

How this skill works

The skill inspects the user's intent and selects the optimal backend: ElevenLabs for high-quality TTS, ElevenLabs SFX and voice cloning, and fal.ai (cassetteai-music) for background music and fast SFX. It calls a local or hosted proxy MCP server which forwards requests to the appropriate model, waits for generation, and returns an audio URL. Configuration requires an ElevenLabs API key and optional fal.ai key for music.

When to use it

  • You need a narration or voice-over for video, podcast, or tutorial.
  • You want background music composed to a prompt and duration.
  • You need a short sound effect (door creak, whoosh, ambience).
  • You want to clone a voice from an audio sample for consistent narration.
  • You need low-latency TTS for near-real-time interactions.

Best practices

  • Provide concise prompts with style, mood, and duration for music (e.g., ‘60s lounge, mellow, 90 seconds’).
  • Specify voice_id and model_id when you need a particular tone or latency (use turbo model for low latency).
  • Keep SFX descriptions focused and set duration between 1–22 seconds for reliable results.
  • Supply a clear, noise-free audio sample for voice cloning to improve fidelity.
  • Host your own proxy if you need regional reliability or to control API keys.

Example use cases

  • Generate a 90‑second ambient music bed for a meditation app via cassetteai-music.
  • Produce a professional product launch voice-over using elevenlabs-tts-v3 and return a sharable audio URL.
  • Create a 3‑second sci‑fi door opening SFX with elevenlabs-sfx for a game asset.
  • Clone a presenter’s voice from a short clip and use it to narrate a multi-episode course.
  • Deliver sub-second TTS responses in interactive demos using the turbo low-latency model.

FAQ

What keys do I need to run the skill?

You must set ELEVENLABS_API_KEY. FAL_KEY is optional and only needed for music generation.

Can I self-host the proxy?

Yes. Deploy the proxy to your own hosting (Vercel or other) and point the skill to your proxy URL to control access and regional availability.