---
name: gemini-cli
description: Run Gemini CLI for AI queries. Use when user asks to "run/ask/use gemini", compare Claude vs Gemini, or delegate tasks to Gemini.
---
# Gemini CLI
Interact with Google's Gemini CLI locally. Run queries, get responses, and compare outputs.
## Prerequisites
The Gemini CLI must be installed and configured:
1. **Install:** https://github.com/google-gemini/gemini-cli
2. **Auth:** Run `gemini` and sign in with your Google account
3. **Verify:** `gemini --version`
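Before scripting against the CLI, a quick preflight check catches a missing install early. This is a generic shell sketch, not part of the skill itself:

```shell
# Preflight: confirm the Gemini CLI is on PATH before running any queries.
if command -v gemini >/dev/null 2>&1; then
  echo "gemini CLI found"
  gemini --version
else
  echo "gemini CLI not found; install it first"
fi
```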
## When to Use
- User asks to "run/ask/use gemini"
- Compare Claude vs Gemini responses
- Get second AI opinion
- Delegate task to Gemini
## Usage
```bash
# One-shot query
gemini "Your prompt"
# Specific model
gemini -m gemini-3-pro-preview "prompt"
# JSON output
gemini -o json "prompt"
# YOLO mode (auto-approve)
gemini -y "prompt"
# File analysis
cat file.txt | gemini "Analyze this"
```
## Comparison Workflow
1. Provide Claude's response first
2. Run same query via Gemini CLI
3. Present both for comparison
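The three steps above can be sketched as a small script. The prompt and Claude's answer here are placeholder text, and the Gemini call is shown commented out so the sketch runs without the CLI installed:

```shell
prompt="Explain the difference between TCP and UDP in one sentence."
# Step 1: Claude's response (placeholder for illustration)
claude_answer="TCP is connection-oriented and reliable; UDP is connectionless and faster."
# Step 2: run the same prompt through Gemini
# gemini_answer=$(gemini "$prompt")   # uncomment once the CLI is installed
gemini_answer="(Gemini CLI output goes here)"
# Step 3: present both responses side by side
printf '== Claude ==\n%s\n\n== Gemini ==\n%s\n' "$claude_answer" "$gemini_answer"
```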
## CLI Options
| Flag | Description |
|------|-------------|
| `-m` | Model (e.g. `gemini-3-pro`) |
| `-o` | Output: text/json/stream-json |
| `-y` | Auto-approve (YOLO) |
| `-d` | Debug mode |
| `-s` | Sandbox mode |
| `-r` | Resume session |
| `-i` | Interactive after prompt |
## Best Practices
- Quote prompts with double quotes
- Use `-o json` for parsing
- Pipe files in for context
- Specify a model when you need specific capabilities
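To illustrate the `-o json` practice, the snippet below pulls a response field out of a captured payload with `jq`. The sample payload and its `response` key are assumptions about the CLI's JSON schema — inspect real `gemini -o json` output on your installed version before relying on it:

```shell
# In a real run: out=$(gemini -o json "What is 2+2?")
# Sample payload standing in for CLI output (schema is an assumption):
out='{"response":"4"}'
# Extract the response text with jq for use in scripts
text=$(printf '%s' "$out" | jq -r '.response')
echo "$text"
```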