---
name: openai-whisper-api
description: Transcribe audio via the OpenAI Audio Transcriptions API (Whisper).
homepage: https://platform.openai.com/docs/guides/speech-to-text
metadata:
  {
    "openclaw":
      {
        "emoji": "☁️",
        "requires": { "bins": ["curl"], "env": ["OPENAI_API_KEY"] },
        "primaryEnv": "OPENAI_API_KEY",
      },
  }
---
# OpenAI Whisper API (curl)
Transcribe an audio file via OpenAI’s `/v1/audio/transcriptions` endpoint.
## Quick start
```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```
Defaults:
- Model: `whisper-1`
- Output: `<input>.txt`
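Under the hood, the quick start boils down to a multipart upload to the transcriptions endpoint. A rough `curl` equivalent is shown below (the script's exact flags may differ; `response_format=text` is one way to get plain text back):
```bash
# Roughly the request the script wraps: multipart upload of the audio file.
curl -sS https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F file="@/path/to/audio.m4a" \
  -F model="whisper-1" \
  -F response_format="text"   # plain text instead of the default JSON
```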
## Useful flags
```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
```
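`--language` and `--prompt` presumably map onto the endpoint's `language` and `prompt` parameters, and `--json` saves the API's JSON response instead of plain text. For reference, the default `json` response format is a small object with the transcript under `text`:
```json
{
  "text": "Transcribed speech as a single string."
}
```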
## API key
Set `OPENAI_API_KEY`, or configure it in `~/.openclaw/openclaw.json`:
```json5
{
  skills: {
    "openai-whisper-api": {
      apiKey: "OPENAI_KEY_HERE",
    },
  },
}
```
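Either way, a quick sanity check from the shell (key shown as a placeholder):
```bash
# Export the key for the current session, then transcribe a file.
export OPENAI_API_KEY="OPENAI_KEY_HERE"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```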