This skill enables offline, local text-to-speech using sherpa-onnx, generating audio with no cloud dependency for your personal AI assistant.

Add it to your agents with:

```bash
npx playbooks add skill openclaw/openclaw --skill sherpa-onnx-tts
```
---
name: sherpa-onnx-tts
description: Local text-to-speech via sherpa-onnx (offline, no cloud)
metadata:
{
"openclaw":
{
"emoji": "🗣️",
"os": ["darwin", "linux", "win32"],
"requires": { "env": ["SHERPA_ONNX_RUNTIME_DIR", "SHERPA_ONNX_MODEL_DIR"] },
"install":
[
{
"id": "download-runtime-macos",
"kind": "download",
"os": ["darwin"],
"url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-osx-universal2-shared.tar.bz2",
"archive": "tar.bz2",
"extract": true,
"stripComponents": 1,
"targetDir": "runtime",
"label": "Download sherpa-onnx runtime (macOS)",
},
{
"id": "download-runtime-linux-x64",
"kind": "download",
"os": ["linux"],
"url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-linux-x64-shared.tar.bz2",
"archive": "tar.bz2",
"extract": true,
"stripComponents": 1,
"targetDir": "runtime",
"label": "Download sherpa-onnx runtime (Linux x64)",
},
{
"id": "download-runtime-win-x64",
"kind": "download",
"os": ["win32"],
"url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-win-x64-shared.tar.bz2",
"archive": "tar.bz2",
"extract": true,
"stripComponents": 1,
"targetDir": "runtime",
"label": "Download sherpa-onnx runtime (Windows x64)",
},
{
"id": "download-model-lessac",
"kind": "download",
"url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-lessac-high.tar.bz2",
"archive": "tar.bz2",
"extract": true,
"targetDir": "models",
"label": "Download Piper en_US lessac (high)",
},
],
},
}
---
# sherpa-onnx-tts
Local TTS using the sherpa-onnx offline CLI.
## Install
1. Download the runtime for your OS (extracts into `~/.openclaw/tools/sherpa-onnx-tts/runtime`)
2. Download a voice model (extracts into `~/.openclaw/tools/sherpa-onnx-tts/models`); a manual download sketch follows this list
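If you prefer to fetch these by hand instead of via the installer, the steps below mirror the install metadata above. The URLs and `--strip-components` values come straight from the metadata; the target paths assume the default `~/.openclaw/tools/sherpa-onnx-tts` layout.

```bash
# Example for Linux x64; pick the macOS or Windows runtime URL from the metadata above.
BASE=~/.openclaw/tools/sherpa-onnx-tts
mkdir -p "$BASE/runtime" "$BASE/models"

# Runtime: strip the archive's top-level directory so binaries land directly in runtime/
curl -L https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-linux-x64-shared.tar.bz2 \
  | tar -xjf - -C "$BASE/runtime" --strip-components=1

# Voice model: extracted as-is, producing models/vits-piper-en_US-lessac-high/
curl -L https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-lessac-high.tar.bz2 \
  | tar -xjf - -C "$BASE/models"
```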
Update `~/.openclaw/openclaw.json`:
```json5
{
skills: {
entries: {
"sherpa-onnx-tts": {
env: {
SHERPA_ONNX_RUNTIME_DIR: "~/.openclaw/tools/sherpa-onnx-tts/runtime",
SHERPA_ONNX_MODEL_DIR: "~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high",
},
},
},
},
}
```
The wrapper lives in this skill folder. Run it directly, or add the wrapper to PATH:
```bash
export PATH="{baseDir}/bin:$PATH"
```
## Usage
```bash
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Hello from local TTS."
```
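To check the output quickly, you can play the WAV with your platform's stock audio player (these are standard OS tools, not part of this skill):

```bash
afplay ./tts.wav   # macOS
aplay ./tts.wav    # Linux (ALSA)
```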
Notes:
- Pick a different model from the sherpa-onnx `tts-models` release if you want another voice.
- If the model dir has multiple `.onnx` files, set `SHERPA_ONNX_MODEL_FILE` or pass `--model-file`.
- You can also pass `--tokens-file` or `--data-dir` to override the defaults (see the sketch after these notes).
- Windows: run `node {baseDir}\\bin\\sherpa-onnx-tts -o tts.wav "Hello from local TTS."`
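The flags above map onto the wrapper's pass-through options. A minimal sketch, assuming the default lessac model directory; the filenames `en_US-lessac-high.onnx`, `tokens.txt`, and `espeak-ng-data` are the conventional contents of Piper model archives, but verify what your archive actually contains:

```bash
MODEL_DIR=~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high

{baseDir}/bin/sherpa-onnx-tts \
  --model-file "$MODEL_DIR/en_US-lessac-high.onnx" \
  --tokens-file "$MODEL_DIR/tokens.txt" \
  --data-dir "$MODEL_DIR/espeak-ng-data" \
  -o ./tts.wav "Explicit overrides."
```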
This skill provides local text-to-speech using the sherpa-onnx offline runtime, so you can synthesize speech without sending data to the cloud. It wraps the sherpa-onnx CLI in a convenient command that works on macOS, Linux, and Windows and integrates with the OpenClaw environment. The focus is on privacy, offline use, and straightforward integration into local workflows.
The wrapper calls the sherpa-onnx TTS runtime using environment variables that point to the runtime and voice model directories. It accepts text and output file arguments and passes options like model file, tokens file, and data directory through to the sherpa-onnx CLI. You can run the script directly or add its bin directory to your PATH for global use.
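In practice the two required environment variables select everything, so for a one-off run you can set them inline instead of editing `openclaw.json`. A sketch using the default install paths, assuming the wrapper reads these variables from the process environment as described above:

```bash
SHERPA_ONNX_RUNTIME_DIR=~/.openclaw/tools/sherpa-onnx-tts/runtime \
SHERPA_ONNX_MODEL_DIR=~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high \
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Inline environment example."
```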
**How do I choose a different voice model?**

Download a different model from the sherpa-onnx `tts-models` release into your models directory, then set `SHERPA_ONNX_MODEL_DIR` to that model's path or pass `--model-file` to the wrapper.
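For example (the model name below is illustrative; browse the `tts-models` release page for the actual asset list):

```bash
BASE=~/.openclaw/tools/sherpa-onnx-tts
# vits-piper-en_US-amy-medium is an assumed example name; substitute a real asset from the release.
curl -L https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-amy-medium.tar.bz2 \
  | tar -xjf - -C "$BASE/models"
# Then point SHERPA_ONNX_MODEL_DIR in ~/.openclaw/openclaw.json at:
#   ~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-amy-medium
```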
**What if the model directory has multiple `.onnx` files?**

Either set `SHERPA_ONNX_MODEL_FILE` to the specific filename or use the wrapper's `--model-file` flag to point to the correct `.onnx` file.
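A sketch of both options, using a hypothetical `voice-a.onnx` inside the model directory:

```bash
# Option 1: via environment variable (the filename within SHERPA_ONNX_MODEL_DIR)
export SHERPA_ONNX_MODEL_FILE=voice-a.onnx
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Picked via env var."

# Option 2: via the pass-through flag
{baseDir}/bin/sherpa-onnx-tts --model-file voice-a.onnx -o ./tts.wav "Picked via flag."
```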