
gemini-cli skill

/.claude/skills/gemini-cli

This skill lets you run Gemini CLI queries, compare responses with Claude, and delegate tasks to Gemini for faster AI-assisted workflows.

npx playbooks add skill georgekhananaev/claude-skills-vault --skill gemini-cli


SKILL.md
---
name: gemini-cli
description: Run Gemini CLI for AI queries. Use when user asks to "run/ask/use gemini", compare Claude vs Gemini, or delegate tasks to Gemini.
---

# Gemini CLI

Interact with Google's Gemini CLI locally. Run queries, get responses, and compare outputs.

## Prerequisites

Gemini CLI must be installed and configured:

1. **Install:** https://github.com/google-gemini/gemini-cli
2. **Auth:** Run `gemini` and sign in with a Google account
3. **Verify:** `gemini --version`

## When to Use

- User asks to "run/ask/use gemini"
- Compare Claude vs Gemini responses
- Get second AI opinion
- Delegate task to Gemini

## Usage

```bash
# One-shot query
gemini "Your prompt"

# Specific model
gemini -m gemini-3-pro-preview "prompt"

# JSON output
gemini -o json "prompt"

# YOLO mode (auto-approve)
gemini -y "prompt"

# File analysis
cat file.txt | gemini "Analyze this"
```

## Comparison Workflow

1. Provide Claude's response first
2. Run same query via Gemini CLI
3. Present both for comparison
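
The three steps above can be sketched as a small shell script. This is a hedged sketch: the prompt and file names are illustrative, Claude's answer is a placeholder supplied by the calling agent, and the live `gemini` call is skipped when the CLI is not on the PATH.

```shell
#!/bin/sh
prompt="Summarize the CAP theorem in one sentence"

# Step 1: the agent supplies Claude's answer first (placeholder here).
printf '%s\n' "[Claude's response]" > claude_answer.txt

# Step 2: run the same prompt through Gemini CLI, if available.
if command -v gemini >/dev/null 2>&1; then
    gemini "$prompt" > gemini_answer.txt
else
    printf '%s\n' "[gemini CLI not installed]" > gemini_answer.txt
fi

# Step 3: present both outputs side by side for review.
paste -d '|' claude_answer.txt gemini_answer.txt
```
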

## CLI Options

| Flag | Description |
|------|-------------|
| `-m` | Model to use (e.g. `gemini-3-pro`) |
| `-o` | Output format: `text`/`json`/`stream-json` |
| `-y` | Auto-approve all actions (YOLO mode) |
| `-d` | Debug mode |
| `-s` | Sandbox mode |
| `-r` | Resume a previous session |
| `-i` | Stay interactive after the prompt |

## Best Practices

- Quote prompts with double quotes
- Use `-o json` when the output will be parsed programmatically
- Pipe files in for context
- Specify a model with `-m` when you need particular capabilities

Overview

This skill runs the Google Gemini CLI locally to send prompts, retrieve structured outputs, and compare Gemini responses with other models. It streamlines one-shot queries, model-specific calls, JSON output parsing, and file-based analyses so you can delegate tasks to Gemini from a developer workflow. Use it to get a second AI opinion or to evaluate differences between Claude and Gemini responses.

How this skill works

The skill invokes the installed Gemini CLI with chosen flags and collects the response. It supports model selection, JSON output for programmatic parsing, streaming modes, and auto-approve (YOLO) execution. For comparisons, provide Claude's output first, then run the same prompt through Gemini and present both results side-by-side.

When to use it

  • When a user asks to run, ask, or use Gemini for a query
  • To compare Claude vs Gemini outputs for correctness or style
  • When you need a second AI opinion or alternate phrasing
  • To delegate tasks that require local CLI interaction or file analysis
  • When you want machine-readable JSON responses for downstream tooling

Best practices

  • Ensure Gemini CLI is installed and authenticated before use
  • Quote prompts with double quotes to preserve whitespace and punctuation
  • Use -o json for reliable parsing and integration with tools
  • Specify -m to target a particular Gemini model for capability parity
  • Pipe files into the CLI for richer context instead of pasting large text
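
Piping a file in, per the last bullet, can look like the sketch below. The log contents and prompt are made up for illustration, and the live call is skipped when `gemini` is not installed.

```shell
# Create an illustrative log file to analyze (hypothetical content).
printf 'ERROR 2024-06-01 12:00:01 disk full on /dev/sda1\n' > sample.log
printf 'WARN  2024-06-01 12:00:05 retrying write\n' >> sample.log

# Pipe the file in as context rather than pasting its contents into the prompt.
if command -v gemini >/dev/null 2>&1; then
    cat sample.log | gemini "Summarize the errors in this log"
else
    echo "gemini CLI not installed; skipping the live call"
fi
```
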

Example use cases

  • Run a security or architecture prompt, then compare Gemini and Claude recommendations
  • Ask Gemini to analyze a log or config file by piping the file into the CLI
  • Generate JSON-formatted linting or test suggestions for automated pipelines
  • Auto-approve a safe internal action with -y for fast iteration in prototyping
  • Switch models to evaluate hallucination rates or answer quality across versions

FAQ

What do I need before using this skill?

Install the Gemini CLI, sign in with your Google account, and verify with gemini --version.

How do I get machine-readable output?

Use the -o json flag to return JSON that your scripts and parsers can consume.
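
For example, a script might capture the JSON and pull out the answer text. This is a sketch: the `response` field name is an assumption about the JSON schema, so inspect your CLI version's actual output before relying on it, and the snippet falls back to a stand-in payload when `gemini` is not installed.

```shell
# Get JSON output from Gemini, or a stand-in payload when the CLI is absent.
if command -v gemini >/dev/null 2>&1; then
    out=$(gemini -o json "Say hello in one word")
else
    out='{"response": "Hello"}'   # hypothetical schema for the offline demo
fi

# Extract the assumed "response" field with Python's stdlib JSON parser.
answer=$(printf '%s' "$out" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("response", ""))')
echo "$answer"
```
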