discord-voice skill

/skills/avatarneil/discord-voice

This skill enables real-time voice conversations in Discord: it transcribes your speech, processes it with Claude, and speaks the response back into the channel.

npx playbooks add skill openclaw/skills --skill discord-voice

SKILL.md
---
name: discord-voice
description: Real-time voice conversations in Discord voice channels with Claude AI
metadata:
  clawdbot:
    config:
      requiredConfig:
        - discord.token
      optionalEnv:
        - OPENAI_API_KEY
        - ELEVENLABS_API_KEY
        - DEEPGRAM_API_KEY
      systemDependencies:
        - ffmpeg
        - build-essential
      example: |
        {
          "plugins": {
            "entries": {
              "discord-voice": {
                "enabled": true,
                "config": {
                  "sttProvider": "local-whisper",
                  "ttsProvider": "openai",
                  "ttsVoice": "nova",
                  "vadSensitivity": "medium",
                  "streamingSTT": true,
                  "bargeIn": true,
                  "allowedUsers": []
                }
              }
            }
          }
        }
---

# Discord Voice Plugin for Clawdbot

Real-time voice conversations in Discord voice channels. Join a voice channel, speak, and have your words transcribed, processed by Claude, and spoken back.

## Features

- **Join/Leave Voice Channels**: Via slash commands, CLI, or agent tool
- **Voice Activity Detection (VAD)**: Automatically detects when users are speaking
- **Speech-to-Text**: Whisper API (OpenAI), Deepgram, or Local Whisper (Offline)
- **Streaming STT**: Real-time transcription with Deepgram WebSocket (~1s latency reduction)
- **Agent Integration**: Transcribed speech is routed through the Clawdbot agent
- **Text-to-Speech**: OpenAI TTS, ElevenLabs, or Kokoro (Local/Offline)
- **Audio Playback**: Responses are spoken back in the voice channel
- **Barge-in Support**: Stops speaking immediately when user starts talking
- **Auto-reconnect**: Automatic heartbeat monitoring and reconnection on disconnect

## Requirements

- Discord bot with voice permissions (Connect, Speak, Use Voice Activity)
- API keys for STT and TTS providers
- System dependencies for voice:
  - `ffmpeg` (audio processing)
  - Native build tools for `@discordjs/opus` and `sodium-native`

## Installation

### 1. Install System Dependencies

```bash
# Ubuntu/Debian
sudo apt-get install ffmpeg build-essential python3

# Fedora/RHEL
sudo dnf install ffmpeg gcc-c++ make python3

# macOS
brew install ffmpeg
```

### 2. Install via ClawdHub

```bash
clawdhub install discord-voice
```

Or manually:

```bash
cd ~/.clawdbot/extensions
git clone <repository-url> discord-voice
cd discord-voice
npm install
```

### 3. Configure in clawdbot.json

```json5
{
  plugins: {
    entries: {
      "discord-voice": {
        enabled: true,
        config: {
          sttProvider: "local-whisper",
          ttsProvider: "openai",
          ttsVoice: "nova",
          vadSensitivity: "medium",
          allowedUsers: [], // Empty = allow all users
          silenceThresholdMs: 1500,
          maxRecordingMs: 30000,
          openai: {
            apiKey: "sk-...", // Or use OPENAI_API_KEY env var
          },
        },
      },
    },
  },
}
```

### 4. Discord Bot Setup

Ensure your Discord bot has these permissions:

- **Connect** - Join voice channels
- **Speak** - Play audio
- **Use Voice Activity** - Detect when users speak

Include these in your bot's OAuth2 invite URL or grant them in the Discord Developer Portal.
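
The three voice permissions map to fixed bits in Discord's permissions integer (bit positions from Discord's documented permission flags); summed, they give the value for the `permissions` query parameter of the invite URL:

```typescript
// Discord permission flags for voice, from Discord's permissions table.
const CONNECT = 1 << 20;            // 1048576  - join voice channels
const SPEAK = 1 << 21;              // 2097152  - play audio
const USE_VOICE_ACTIVITY = 1 << 25; // 33554432 - use VAD instead of push-to-talk

const voicePermissions = CONNECT | SPEAK | USE_VOICE_ACTIVITY;
console.log(voicePermissions); // 36700160
```

The resulting invite URL looks like `https://discord.com/oauth2/authorize?client_id=<your-app-id>&scope=bot&permissions=36700160`.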

## Configuration

| Option                | Type     | Default           | Description                                     |
| --------------------- | -------- | ----------------- | ----------------------------------------------- |
| `enabled`             | boolean  | `true`            | Enable/disable the plugin                       |
| `sttProvider`         | string   | `"local-whisper"` | `"whisper"`, `"deepgram"`, or `"local-whisper"` |
| `streamingSTT`        | boolean  | `true`            | Use streaming STT (Deepgram only, ~1s faster)   |
| `ttsProvider`         | string   | `"openai"`        | `"openai"` or `"elevenlabs"`                    |
| `ttsVoice`            | string   | `"nova"`          | Voice ID for TTS                                |
| `vadSensitivity`      | string   | `"medium"`        | `"low"`, `"medium"`, or `"high"`                |
| `bargeIn`             | boolean  | `true`            | Stop speaking when user talks                   |
| `allowedUsers`        | string[] | `[]`              | User IDs allowed (empty = all)                  |
| `silenceThresholdMs`  | number   | `1500`            | Silence before processing (ms)                  |
| `maxRecordingMs`      | number   | `30000`           | Max recording length (ms)                       |
| `heartbeatIntervalMs` | number   | `30000`           | Connection health check interval                |
| `autoJoinChannel`     | string   | `undefined`       | Channel ID to auto-join on startup              |
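
The defaults in the table can be expressed as a plain object. The sketch below shows one way a plugin might merge user config over those defaults; the `VoiceConfig` shape and `mergeConfig` helper are illustrative, not the plugin's actual internals:

```typescript
// Config shape mirroring the options table above (illustrative).
interface VoiceConfig {
  enabled: boolean;
  sttProvider: "whisper" | "deepgram" | "local-whisper";
  streamingSTT: boolean;
  ttsProvider: "openai" | "elevenlabs";
  ttsVoice: string;
  vadSensitivity: "low" | "medium" | "high";
  bargeIn: boolean;
  allowedUsers: string[];
  silenceThresholdMs: number;
  maxRecordingMs: number;
  heartbeatIntervalMs: number;
  autoJoinChannel?: string;
}

// Defaults taken from the table above.
const defaults: VoiceConfig = {
  enabled: true,
  sttProvider: "local-whisper",
  streamingSTT: true,
  ttsProvider: "openai",
  ttsVoice: "nova",
  vadSensitivity: "medium",
  bargeIn: true,
  allowedUsers: [],
  silenceThresholdMs: 1500,
  maxRecordingMs: 30000,
  heartbeatIntervalMs: 30000,
};

// User-supplied values win; anything omitted falls back to the default.
function mergeConfig(user: Partial<VoiceConfig>): VoiceConfig {
  return { ...defaults, ...user };
}
```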

### Provider Configuration

#### OpenAI (Whisper + TTS)

```json5
{
  openai: {
    apiKey: "sk-...",
    whisperModel: "whisper-1",
    ttsModel: "tts-1",
  },
}
```

#### ElevenLabs (TTS only)

```json5
{
  elevenlabs: {
    apiKey: "...",
    voiceId: "21m00Tcm4TlvDq8ikWAM", // Rachel
    modelId: "eleven_multilingual_v2",
  },
}
```

#### Deepgram (STT only)

```json5
{
  deepgram: {
    apiKey: "...",
    model: "nova-2",
  },
}
```

## Usage

### Slash Commands (Discord)

Once registered with Discord, use these commands:

- `/discord_voice join <channel>` - Join a voice channel
- `/discord_voice leave` - Leave the current voice channel
- `/discord_voice status` - Show voice connection status

### CLI Commands

```bash
# Join a voice channel
clawdbot discord_voice join <channelId>

# Leave voice
clawdbot discord_voice leave --guild <guildId>

# Check status
clawdbot discord_voice status
```

### Agent Tool

The agent can use the `discord_voice` tool:

```
Join voice channel 1234567890
```

The tool supports actions:

- `join` - Join a voice channel (requires channelId)
- `leave` - Leave voice channel
- `speak` - Speak text in the voice channel
- `status` - Get current voice status

## How It Works

1. **Join**: Bot joins the specified voice channel
2. **Listen**: VAD detects when users start/stop speaking
3. **Record**: Audio is buffered while user speaks
4. **Transcribe**: On silence, audio is sent to STT provider
5. **Process**: Transcribed text is sent to Clawdbot agent
6. **Synthesize**: Agent response is converted to audio via TTS
7. **Play**: Audio is played back in the voice channel
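
The transcribe-to-play part of the loop can be sketched as one async function with each stage injected as a callback. The stage signatures here are illustrative, not the plugin's actual interfaces:

```typescript
// A pipeline stage turns one value into another asynchronously.
type Stage<I, O> = (input: I) => Promise<O>;

// Handle one finished utterance: transcribe, run the agent, synthesize, play.
async function handleUtterance(
  audio: Uint8Array,
  stt: Stage<Uint8Array, string>,
  agent: Stage<string, string>,
  tts: Stage<string, Uint8Array>,
  play: (speech: Uint8Array) => Promise<void>,
): Promise<void> {
  const transcript = await stt(audio);   // 4. Transcribe
  if (!transcript.trim()) return;        // skip empty transcriptions
  const reply = await agent(transcript); // 5. Process
  const speech = await tts(reply);       // 6. Synthesize
  await play(speech);                    // 7. Play
}
```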

## Streaming STT (Deepgram)

When using Deepgram as your STT provider, streaming mode is enabled by default. This provides:

- **~1 second faster** end-to-end latency
- **Real-time feedback** with interim transcription results
- **Automatic keep-alive** to prevent connection timeouts
- **Fallback** to batch transcription if streaming fails

To use streaming STT:

```json5
{
  sttProvider: "deepgram",
  streamingSTT: true, // default
  deepgram: {
    apiKey: "...",
    model: "nova-2",
  },
}
```
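
The fallback behavior can be sketched as a wrapper that tries the streaming path first and drops to batch transcription if the stream fails. Both transcriber callbacks are hypothetical stand-ins:

```typescript
// Try streaming STT first; on any error, fall back to batch transcription.
async function transcribeWithFallback(
  audio: Uint8Array,
  streaming: (audio: Uint8Array) => Promise<string>,
  batch: (audio: Uint8Array) => Promise<string>,
): Promise<string> {
  try {
    return await streaming(audio);
  } catch (err) {
    console.warn("[discord-voice] streaming STT failed, falling back to batch", err);
    return batch(audio);
  }
}
```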

## Barge-in Support

When enabled (default), the bot will immediately stop speaking if a user starts talking. This creates a more natural conversational flow where you can interrupt the bot.
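
One common way to implement this pattern is with an `AbortController`: playback checks an abort signal between audio chunks, and the VAD's speech-start event aborts it. This sketch is illustrative, not the plugin's actual code:

```typescript
// Playback that can be interrupted between chunks via an AbortController.
class Playback {
  private controller = new AbortController();

  // Called by the VAD when a user starts speaking.
  interrupt(): void {
    this.controller.abort();
  }

  // Returns true if playback finished, false if it was barged in on.
  async play(
    chunks: Uint8Array[],
    playChunk: (chunk: Uint8Array) => Promise<void>,
  ): Promise<boolean> {
    for (const chunk of chunks) {
      if (this.controller.signal.aborted) return false; // user barged in
      await playChunk(chunk);
    }
    return true;
  }
}
```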

To disable (let the bot finish speaking):

```json5
{
  bargeIn: false,
}
```

## Auto-reconnect

The plugin includes automatic connection health monitoring:

- **Heartbeat checks** every 30 seconds (configurable)
- **Auto-reconnect** on disconnect with exponential backoff
- **Max 3 attempts** before giving up

If the connection drops, you'll see logs like:

```
[discord-voice] Disconnected from voice channel
[discord-voice] Reconnection attempt 1/3
[discord-voice] Reconnected successfully
```
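
With exponential backoff and a cap of 3 attempts, the retry delay doubles on each attempt. The 1000 ms base delay below is an assumption for illustration; the plugin's actual timing may differ:

```typescript
// Compute the reconnect delays for exponential backoff: base * 2^attempt.
function backoffDelays(maxAttempts: number, baseMs = 1000): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    delays.push(baseMs * 2 ** attempt);
  }
  return delays;
}

console.log(backoffDelays(3)); // [ 1000, 2000, 4000 ]
```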

## VAD Sensitivity

- **low**: Picks up quiet speech, may trigger on background noise
- **medium**: Balanced (recommended)
- **high**: Requires louder, clearer speech
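
Under the hood, a sensitivity setting typically maps to an energy threshold. The mapping below is purely illustrative (the plugin's real values may differ), but it shows the direction: the `low` setting uses a lower threshold, so quieter audio triggers the VAD:

```typescript
type VadSensitivity = "low" | "medium" | "high";

// Illustrative energy thresholds in dBFS (assumed values, not the plugin's).
const thresholds: Record<VadSensitivity, number> = {
  low: -50,    // lower threshold: picks up quiet speech, more noise triggers
  medium: -40, // balanced
  high: -30,   // requires louder, clearer speech
};

function energyThresholdDb(sensitivity: VadSensitivity): number {
  return thresholds[sensitivity];
}
```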

## Troubleshooting

### "Discord client not available"

Ensure the Discord channel is configured and the bot is connected before using voice.

### Opus/Sodium build errors

Install build tools:

```bash
npm install -g node-gyp
npm rebuild @discordjs/opus sodium-native
```

### No audio heard

1. Check bot has Connect + Speak permissions
2. Check bot isn't server muted
3. Verify TTS API key is valid

### Transcription not working

1. Check STT API key is valid
2. Check audio is being recorded (see debug logs)
3. Try adjusting VAD sensitivity

### Enable debug logging

```bash
DEBUG=discord-voice clawdbot gateway start
```

## Environment Variables

| Variable             | Description                    |
| -------------------- | ------------------------------ |
| `DISCORD_TOKEN`      | Discord bot token (required)   |
| `OPENAI_API_KEY`     | OpenAI API key (Whisper + TTS) |
| `ELEVENLABS_API_KEY` | ElevenLabs API key             |
| `DEEPGRAM_API_KEY`   | Deepgram API key               |

## Limitations

- Only one voice channel per guild at a time
- Maximum recording length: 30 seconds (configurable)
- Requires stable network for real-time audio
- TTS output may have slight delay due to synthesis

## License

MIT

## Overview

This skill enables real-time voice conversations inside Discord voice channels by routing live speech through the Clawdbot agent. Join a channel, speak, and the plugin transcribes your words, processes them with Claude, and replies with synthesized audio played back into the channel. It supports multiple STT and TTS providers, VAD, barge-in, and automatic reconnection for reliable sessions.

## How this skill works

The bot joins a specified voice channel and uses voice activity detection (VAD) to detect when participants speak. Recorded audio is sent to the configured STT provider (Whisper, Deepgram, or local Whisper), and the resulting transcript is passed to the Clawdbot agent for response generation. The agent's reply is converted to audio with the chosen TTS provider and played back; barge-in stops playback immediately if a user starts talking.

## When to use it

- Run live agent-driven conversations with users in Discord voice channels.
- Prototype voice-enabled assistants or game master bots that need real-time replies.
- Provide accessible voice interfaces for community servers.
- Record and transcribe short spoken interactions for moderation or logging.
- Demo conversational AI with minimal setup using the available STT/TTS providers.

## Best practices

- Prefer Deepgram streaming STT for lower latency when responsiveness matters.
- Set `vadSensitivity` to `"medium"` and tune `silenceThresholdMs` to avoid premature cutoffs.
- Keep `maxRecordingMs` at a reasonable value (default 30 s) so processing stays fast.
- Provide only the required API keys, via environment variables, to avoid leaking secrets.
- Ensure the bot has Connect, Speak, and Use Voice Activity permissions in Discord.

## Example use cases

- Customer support agent in a community server that answers product questions in voice.
- Interactive game master bot that listens to players and narrates dynamic responses.
- Live language practice sessions using transcription and AI feedback.
- Accessibility assistant that transcribes voice input and reads back summaries.
- Conference room-style Q&A where participants interrupt and the bot adapts with barge-in.

## FAQ

### Which STT and TTS providers are supported?

STT: OpenAI Whisper, Deepgram, and local Whisper. TTS: OpenAI TTS, ElevenLabs, and local Kokoro.

### How do I reduce reply latency?

Use Deepgram with `streamingSTT` enabled for roughly 1 s lower latency, lower `silenceThresholdMs`, and choose TTS models optimized for speed (e.g. `tts-1`).