This skill enables commercial-grade speech synthesis and transcription using Gemini-TTS and Chirp 3, supporting multi-speaker voices and diarization.
```
npx playbooks add skill cnemri/google-genai-skills --skill speech-build
```
---
name: speech-build
description: Generate and transcribe speech using Google's Gemini-TTS and Chirp 3 models. Supports Text-to-Speech (Single/Multi-speaker), Instant Custom Voice, and Speech-to-Text (Transcription/Diarization).
---
# Speech Skill (TTS & STT)
Use this skill to implement audio generation and transcription workflows using the `google-genai` and `google-cloud-speech` SDKs.
## Quick Start Setup
```python
from google import genai
from google.genai import types
# For STT: from google.cloud import speech_v2

# Reads the API key from GEMINI_API_KEY (or GOOGLE_API_KEY) in the environment.
client = genai.Client()
```
## Reference Materials
- **[Text-to-Speech (TTS)](references/tts.md)**: Gemini-TTS, Chirp 3 HD, Instant Custom Voice.
- **[Speech-to-Text (STT)](references/stt.md)**: Chirp 3 Transcription, Diarization, Streaming.
- **[Voices & Locales](references/voices.md)**: Available voices (`Aoede`, `Puck`...) and languages.
- **[Prompting Guide](references/prompting.md)**: How to control style, accent, and pacing in Gemini-TTS.
- **[Source Code](references/source_code.md)**: Deep inspection of SDK internals.
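Gemini-TTS takes style, accent, and pacing directions as plain natural-language text inside the prompt itself (see the prompting guide above). As a minimal sketch, a hypothetical helper that prefixes a style instruction onto the text to speak:

```python
def style_prompt(instruction: str, text: str) -> str:
    """Prefix the text to speak with a natural-language style instruction.

    The combined string is passed as `contents` to generate_content.
    """
    return f"{instruction}: {text}"

prompt = style_prompt("Say slowly, in a calm whisper", "The library closes in five minutes.")
```

The model treats the leading instruction as direction rather than reading it aloud; `references/prompting.md` covers which phrasings work best.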
## Common Workflows
### 1. Generate Speech (Gemini-TTS)
```python
response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",
    contents="Hello, world!",
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
            )
        ),
    ),
)
# The raw PCM audio bytes are in the first response part:
audio_bytes = response.candidates[0].content.parts[0].inline_data.data
```
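Gemini-TTS returns raw 16-bit mono PCM at 24 kHz, so the bytes above need a WAV container before most players can open them. A minimal sketch using only the standard library (the sample rate is an assumption taken from Google's TTS docs):

```python
import wave

def save_pcm_as_wav(pcm: bytes, path: str, rate: int = 24000) -> None:
    """Wrap raw 16-bit mono PCM bytes in a WAV container."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)     # mono
        wf.setsampwidth(2)     # 16-bit samples
        wf.setframerate(rate)  # Gemini-TTS outputs 24 kHz PCM
        wf.writeframes(pcm)

# e.g. save_pcm_as_wav(audio_bytes, "out.wav")
```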
### 2. Transcribe Audio (Chirp 3)
```python
# Requires: pip install google-cloud-speech
from google.cloud import speech_v2

speech_client = speech_v2.SpeechClient()
# Build a RecognizeRequest (recognizer, config, audio content); see stt.md for full setup.
response = speech_client.recognize(...)
```
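Diarized Chirp 3 results attach a speaker label to each recognized word, and downstream code usually regroups consecutive words into speaker turns. A sketch over plain `(word, speaker_label)` pairs — the exact response fields are documented in stt.md, so the pair shape here is an assumption:

```python
from itertools import groupby

def group_by_speaker(words):
    """Collapse (word, speaker_label) pairs into (speaker, utterance) turns."""
    return [
        (speaker, " ".join(word for word, _ in run))
        for speaker, run in groupby(words, key=lambda pair: pair[1])
    ]

# group_by_speaker([("hi", "1"), ("there", "1"), ("hello", "2")])
# -> [("1", "hi there"), ("2", "hello")]
```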
## FAQ
**Which clients do I instantiate for TTS and STT?**
Use `genai.Client()` for Gemini-TTS and `speech_v2.SpeechClient()` from `google.cloud.speech_v2` for Chirp 3 transcription.

**Can I create a custom voice?**
Yes. The Instant Custom Voice option lets you generate branded voices; follow the voice-creation and prompting guidelines to control style.

**How do I improve diarization accuracy?**
Provide channel-separated audio when possible, segment long files, and tune the diarization settings in the speech client configuration.
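For the "segment long files" tip, a minimal standard-library sketch that cuts a WAV file into fixed-length chunks, each of which can then be sent to the recognizer separately:

```python
import wave

def split_wav(path: str, chunk_seconds: int = 60) -> list:
    """Split a WAV file into fixed-length chunks; returns the chunk paths."""
    chunks = []
    with wave.open(path, "rb") as wf:
        frames_per_chunk = wf.getframerate() * chunk_seconds
        nchannels = wf.getnchannels()
        sampwidth = wf.getsampwidth()
        framerate = wf.getframerate()
        i = 0
        while True:
            frames = wf.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = f"{path}.part{i}.wav"
            with wave.open(out_path, "wb") as out:
                out.setnchannels(nchannels)
                out.setsampwidth(sampwidth)
                out.setframerate(framerate)
                out.writeframes(frames)
            chunks.append(out_path)
            i += 1
    return chunks
```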