
gemini skill


This skill executes Gemini CLI for AI-powered code analysis and generation, enabling complex reasoning and high-quality code insights within Claude workflows.

npx playbooks add skill cexll/myclaude --skill gemini

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
3.1 KB
---
name: gemini
description: Execute Gemini CLI for AI-powered code analysis and generation. Use when you need to leverage Google's Gemini models for complex reasoning tasks.
---

# Gemini CLI Integration

## Overview

Execute Gemini CLI commands with support for multiple models and flexible prompt input. Integrates Google's Gemini AI models into Claude Code workflows.

## When to Use

- Complex reasoning tasks requiring advanced AI capabilities
- Code generation and analysis with Gemini models
- Tasks requiring Google's latest AI technology
- Alternative perspective on code problems

## Usage
**Mandatory**: Run via `uv` in the foreground with the fixed 7200000 ms (2-hour) timeout:
```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "<prompt>" [working_dir]
```

**Optional** (direct execution or using Python):
```bash
~/.claude/skills/gemini/scripts/gemini.py "<prompt>" [working_dir]
# or
python3 ~/.claude/skills/gemini/scripts/gemini.py "<prompt>" [working_dir]
```

## Environment Variables

- **GEMINI_MODEL**: Configure model (default: `gemini-3-pro-preview`)
  - Example: `export GEMINI_MODEL=gemini-3`

## Timeout Control

- **Fixed**: 7200000 milliseconds (2 hours); this value is not configurable
- **Bash tool**: Always set `timeout: 7200000` as well, so the limit is enforced at both layers
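
When driving the script from Python rather than the Bash tool, the same limit can be mirrored on the caller side. A minimal sketch (the helper name and the conversion are illustrative; `subprocess.run` takes its timeout in seconds, not milliseconds):

```python
import subprocess

TIMEOUT_MS = 7_200_000           # the skill's fixed 2-hour limit
TIMEOUT_S = TIMEOUT_MS / 1000    # subprocess.run expects seconds

def call_with_limit(cmd):
    # Hypothetical helper: enforces the same 2-hour ceiling on the caller side.
    return subprocess.run(cmd, capture_output=True, text=True, timeout=TIMEOUT_S)
```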

### Parameters

- `prompt` (required): Task prompt or question
- `working_dir` (optional): Working directory (default: current directory)

### Return Format

Plain text output from Gemini:

```text
Model response text here...
```

Error format (stderr):

```text
ERROR: Error message
```

### Invocation Pattern

When calling via Bash tool, always include the timeout parameter:

```yaml
Bash tool parameters:
- command: uv run ~/.claude/skills/gemini/scripts/gemini.py "<prompt>"
- timeout: 7200000
- description: <brief description of the task>
```

Alternatives:

```yaml
# Direct execution (simplest)
- command: ~/.claude/skills/gemini/scripts/gemini.py "<prompt>"

# Using python3
- command: python3 ~/.claude/skills/gemini/scripts/gemini.py "<prompt>"
```

### Examples

**Basic query:**

```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "explain quantum computing"
# timeout: 7200000
```

**Code analysis:**

```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "review this code for security issues: $(cat app.py)"
# timeout: 7200000
```
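
The `$(cat app.py)` substitution above can also be done in Python when driving the script programmatically. A small hypothetical helper (the function name and prompt wording are illustrative, not part of the skill):

```python
from pathlib import Path

def make_review_prompt(source_path, task="review this code for security issues"):
    # Hypothetical helper: inlines a source file into the prompt, mirroring
    # the shell pattern `$(cat app.py)` from the example above.
    code = Path(source_path).read_text()
    return f"{task}:\n{code}"
```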

**With specific working directory:**

```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "analyze project structure" "/path/to/project"
# timeout: 7200000
```

**Using python3 directly (alternative):**

```bash
python3 ~/.claude/skills/gemini/scripts/gemini.py "your prompt here"
```

## Notes

- **Recommended**: Use `uv run` for automatic Python environment management (requires uv installed)
- **Alternative**: Direct execution `./gemini.py` (uses system Python via shebang)
- Python implementation using standard library (zero dependencies)
- Cross-platform compatible (Windows/macOS/Linux)
- PEP 723 compliant (inline script metadata)
- Requires Gemini CLI installed and authenticated
- Supports all Gemini model variants (configure via `GEMINI_MODEL` environment variable)
- Output is streamed directly from Gemini CLI

Overview

This skill executes the Gemini CLI to run Google's Gemini models for AI-powered code analysis and generation. It integrates into multi-agent workflows and provides a simple CLI wrapper with environment-driven model selection. The tool is optimized for long-running, complex reasoning tasks with a fixed, safe timeout.

How this skill works

The skill wraps a Python CLI that sends the provided prompt to the configured Gemini model and streams plain-text responses back to the caller. It accepts a required prompt and an optional working directory, and reads the model selection from the GEMINI_MODEL environment variable. Execution is recommended via the provided runner (uv run), which manages the Python environment; the script itself enforces the fixed two-hour timeout.
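
The wrapper's internals are not shown here, but the interface it documents can be sketched from the caller's side. All names below are illustrative; only the argument order (prompt, then optional working directory) and the GEMINI_MODEL default come from this document:

```python
import os

def build_invocation(prompt, working_dir=None):
    # Hypothetical sketch of the documented interface: prompt is required,
    # working_dir is optional, and the model comes from GEMINI_MODEL.
    model = os.environ.get("GEMINI_MODEL", "gemini-3-pro-preview")
    argv = ["uv", "run",
            os.path.expanduser("~/.claude/skills/gemini/scripts/gemini.py"),
            prompt]
    if working_dir:
        argv.append(working_dir)
    return argv, model
```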

When to use it

  • When you need advanced reasoning for code generation or refactoring
  • For automated security or style reviews of codebases
  • When you want an alternative perspective alongside other models in multi-agent workflows
  • For long-running analytical tasks that benefit from Gemini’s capabilities
  • When you require streamed model output directly in the terminal or automation pipeline

Best practices

  • Always invoke via the uv runner to ensure the environment is managed and dependencies are isolated
  • Set GEMINI_MODEL when you need a specific Gemini variant; default is gemini-3-pro-preview
  • Include a concise but specific prompt; paste code or use file substitution for larger contexts
  • Provide working_dir when analysis depends on repository layout or relative imports
  • Honor the fixed 2-hour timeout in orchestration tools to avoid hanging jobs

Example use cases

  • Generate or refactor complex functions and classes using Gemini’s reasoning
  • Perform security and dependency analysis by piping source files into the prompt
  • Analyze project structure and produce actionable improvement plans for a repo directory
  • Run exploratory queries about algorithms, protocols, or architecture trade-offs
  • Compare Gemini output with other agents in a multi-model orchestration flow

FAQ

How do I change which Gemini model is used?

Set the GEMINI_MODEL environment variable before running the CLI (for example export GEMINI_MODEL=gemini-3). If unset, the skill defaults to gemini-3-pro-preview.

What timeout should I use in orchestration tools?

Always set timeout: 7200000 (milliseconds) or respect the fixed 2-hour limit; the CLI and recommended Bash tool pattern enforce this value for safety.