
gemini skill


This skill executes Gemini CLI to perform AI-powered code analysis and generation, enabling advanced reasoning and faster code insights.

This is most likely a fork of the gemini skill from cexll.

```bash
npx playbooks add skill microck/ordinary-claude-skills --skill gemini
```


Files (2)

SKILL.md
---
name: gemini
description: Execute Gemini CLI for AI-powered code analysis and generation. Use when you need to leverage Google's Gemini models for complex reasoning tasks.
---

# Gemini CLI Integration

## Overview

Execute Gemini CLI commands with support for multiple models and flexible prompt input. Integrates Google's Gemini AI models into Claude Code workflows.

## When to Use

- Complex reasoning tasks requiring advanced AI capabilities
- Code generation and analysis with Gemini models
- Tasks requiring Google's latest AI technology
- Alternative perspective on code problems

## Usage
**Mandatory**: Run via `uv` with a fixed 7200000 ms timeout (foreground):
```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "<prompt>" [working_dir]
```

**Optional** (direct execution or using Python):
```bash
~/.claude/skills/gemini/scripts/gemini.py "<prompt>" [working_dir]
# or
python3 ~/.claude/skills/gemini/scripts/gemini.py "<prompt>" [working_dir]
```

## Environment Variables

- **GEMINI_MODEL**: Configure model (default: `gemini-3-pro-preview`)
  - Example: `export GEMINI_MODEL=gemini-3`
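A wrapper like `gemini.py` would typically resolve the model by checking the environment first and falling back to the documented default. The helper below is a minimal sketch of that logic (`resolve_model` is a hypothetical name, not necessarily what the script uses):

```python
import os

# Documented default model for this skill.
DEFAULT_MODEL = "gemini-3-pro-preview"

def resolve_model(env=None):
    """Return GEMINI_MODEL if set, otherwise the documented default."""
    env = os.environ if env is None else env
    return env.get("GEMINI_MODEL", DEFAULT_MODEL)
```

Passing a dict instead of reading `os.environ` directly keeps the lookup easy to test.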

## Timeout Control

- **Fixed**: 7200000 milliseconds (2 hours), not configurable
- **Bash tool**: Always set `timeout: 7200000` as a second layer of protection

### Parameters

- `prompt` (required): Task prompt or question
- `working_dir` (optional): Working directory (default: current directory)
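The parameter contract above (required prompt, optional working directory) can be sketched as a small argument parser. This is an illustration of the documented interface, not the script's actual code:

```python
def parse_args(argv):
    """Mirror the documented CLI: gemini.py "<prompt>" [working_dir]."""
    if len(argv) < 1:
        raise SystemExit('usage: gemini.py "<prompt>" [working_dir]')
    prompt = argv[0]
    # Default to the current directory, as the docs state.
    working_dir = argv[1] if len(argv) > 1 else "."
    return prompt, working_dir
```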

### Return Format

Plain text output from Gemini:

```text
Model response text here...
```

Error format (stderr):

```text
ERROR: Error message
```

### Invocation Pattern

When calling via Bash tool, always include the timeout parameter:

```yaml
Bash tool parameters:
- command: uv run ~/.claude/skills/gemini/scripts/gemini.py "<prompt>"
- timeout: 7200000
- description: <brief description of the task>
```

Alternatives:

```yaml
# Direct execution (simplest)
- command: ~/.claude/skills/gemini/scripts/gemini.py "<prompt>"

# Using python3
- command: python3 ~/.claude/skills/gemini/scripts/gemini.py "<prompt>"
```

### Examples

**Basic query:**

```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "explain quantum computing"
# timeout: 7200000
```

**Code analysis:**

```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "review this code for security issues: $(cat app.py)"
# timeout: 7200000
```

**With specific working directory:**

```bash
uv run ~/.claude/skills/gemini/scripts/gemini.py "analyze project structure" "/path/to/project"
# timeout: 7200000
```

**Using python3 directly (alternative):**

```bash
python3 ~/.claude/skills/gemini/scripts/gemini.py "your prompt here"
```

## Notes

- **Recommended**: Use `uv run` for automatic Python environment management (requires uv installed)
- **Alternative**: Direct execution `./gemini.py` (uses system Python via shebang)
- Python implementation using standard library (zero dependencies)
- Cross-platform compatible (Windows/macOS/Linux)
- PEP 723 compliant (inline script metadata)
- Requires Gemini CLI installed and authenticated
- Supports all Gemini model variants (configure via `GEMINI_MODEL` environment variable)
- Output is streamed directly from Gemini CLI
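For reference, a PEP 723 inline-metadata header looks roughly like the following. The actual fields in `gemini.py` are not shown on this page, so this block is purely illustrative:

```python
# /// script
# requires-python = ">=3.8"
# dependencies = []
# ///
# With a header like this, `uv run gemini.py` can resolve the interpreter
# and dependencies without a separately managed virtual environment.
```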

Overview

This skill runs the Gemini CLI to perform AI-driven code analysis and generation using Google’s Gemini models. It integrates into command workflows and supports configurable model selection and a fixed 2-hour execution window. Use it when you want Gemini-powered reasoning or an alternative AI perspective on code tasks.

How this skill works

The skill invokes a Python wrapper around the Gemini CLI, streaming plain-text model responses to stdout and errors to stderr. It accepts a required prompt and an optional working directory, reads GEMINI_MODEL for model selection, and enforces a fixed 7200000 ms (2 hour) timeout for all executions. Recommended invocation uses the provided command runner to ensure consistent environment handling.
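The wrapper's internals are not published on this page, so the following is a hypothetical reconstruction of how an automation caller could drive it from Python: the command mirrors the documented `uv` invocation, and the fixed millisecond budget is converted to the seconds that `subprocess.run` expects.

```python
import os
import subprocess

TIMEOUT_MS = 7_200_000  # fixed 2-hour window documented by the skill

def build_invocation(prompt, working_dir=None):
    """Assemble the documented invocation: uv run gemini.py "<prompt>" [dir]."""
    cmd = ["uv", "run",
           os.path.expanduser("~/.claude/skills/gemini/scripts/gemini.py"),
           prompt]
    if working_dir:
        cmd.append(working_dir)
    return cmd

def run_gemini(prompt, working_dir=None):
    # subprocess.run takes seconds, so the fixed ms budget is converted here.
    return subprocess.run(build_invocation(prompt, working_dir),
                          capture_output=True, text=True,
                          timeout=TIMEOUT_MS / 1000)
```

Stdout of the completed process holds the model response; stderr carries any `ERROR:` lines.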

When to use it

  • Complex reasoning tasks that benefit from Gemini’s advanced models
  • Automated code generation, refactoring, or synthesis requests
  • Security reviews and static analysis of code snippets or projects
  • Project- or repo-level structure analysis using an optional working directory
  • When you need an alternative model perspective alongside other AI tools

Best practices

  • Run through the recommended runner to ensure the correct Python environment and consistent behavior
  • Set GEMINI_MODEL in your environment to pick the desired Gemini variant before running
  • Keep prompts focused and include relevant code or file context via the working directory where appropriate
  • Always include the 7200000 ms timeout when invoking from automation tools to avoid hung processes
  • Inspect stderr for explicit error messages and treat stdout as the canonical model response

Example use cases

  • Explain a complex algorithm in plain language: provide the algorithm prompt and get a step-by-step explanation
  • Security audit: feed source files or snippets and request vulnerability identification and fixes
  • Code review: ask for improvements, edge cases, or performance suggestions for a given file
  • Project analysis: run within a repository path to get structural recommendations and refactor ideas
  • Generate unit tests or example usage based on provided functions and expected behavior

FAQ

How do I change which Gemini model is used?

Set the GEMINI_MODEL environment variable to the desired model name before running the command.

Can I run longer than two hours?

No. The execution enforces a fixed 7200000 ms (2 hour) timeout for safety and consistency.