
oracle skill


This skill bundles your prompt and files into a one-shot review session, providing targeted advisory feedback from a second-model analysis.

This is most likely a fork of the oracle skill from openclaw.
npx playbooks add skill steipete/agent-scripts --skill oracle

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: oracle
description: Use the @steipete/oracle CLI to bundle a prompt plus the right files and get a second-model review (API or browser) for debugging, refactors, design checks, or cross-validation.
---

# Oracle (CLI) — best use

Oracle bundles your prompt + selected files into one “one-shot” request so another model can answer with real repo context (API or browser automation). Treat outputs as advisory: verify against the codebase + tests.

## Main use case (browser, GPT‑5.2 Pro)

Default workflow here: `--engine browser` with GPT‑5.2 Pro in ChatGPT. This is the “human in the loop” path: it can take ~10 minutes to ~1 hour; expect a stored session you can reattach to.

Recommended defaults:
- Engine: browser (`--engine browser`)
- Model: GPT‑5.2 Pro (either `--model gpt-5.2-pro` or a ChatGPT picker label like `--model "5.2 Pro"`)
- Attachments: directories/globs + excludes; avoid secrets.

## Golden path (fast + reliable)

1. Pick a tight file set (fewest files that still contain the truth).
2. Preview what you’re about to send (`--dry-run` + `--files-report` when needed).
3. Run in browser mode for the usual GPT‑5.2 Pro ChatGPT workflow; use API only when you explicitly want it.
4. If the run detaches or times out: reattach to the stored session (don’t re-run).
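As a sketch, the golden path can be scripted end to end. The wrapper below only *prints* the commands it would run (the prompt and file patterns are placeholders for your own task), so nothing is sent until you copy a line out yourself:

```shell
# Sketch of the golden path; prints each command instead of running it.
# PROMPT and the --file patterns are placeholders, not real project values.
PROMPT="Explain why the parser drops trailing commas"
FILES='--file "src/**" --file "!**/*.test.*"'

# 1. Preview the bundle (no tokens spent).
CMD_PREVIEW="npx -y @steipete/oracle --dry-run summary -p \"$PROMPT\" $FILES"
echo "$CMD_PREVIEW"

# 2. Check token/file impact before paying for a long run.
CMD_REPORT="npx -y @steipete/oracle --dry-run summary --files-report -p \"$PROMPT\" $FILES"
echo "$CMD_REPORT"

# 3. Launch the browser run (long-running is normal).
CMD_RUN="npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p \"$PROMPT\" $FILES"
echo "$CMD_RUN"

# 4. If it detaches: list sessions and reattach instead of re-running.
CMD_REATTACH="oracle status --hours 72"
echo "$CMD_REATTACH"
```

All flags used here appear elsewhere in this document; only the prompt text and glob choices are illustrative.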

## Commands (preferred)

- Show help (once/session):
  - `npx -y @steipete/oracle --help`

- Preview (no tokens):
  - `npx -y @steipete/oracle --dry-run summary -p "<task>" --file "src/**" --file "!**/*.test.*"`
  - `npx -y @steipete/oracle --dry-run full -p "<task>" --file "src/**"`

- Token/cost sanity:
  - `npx -y @steipete/oracle --dry-run summary --files-report -p "<task>" --file "src/**"`

- Browser run (main path; long-running is normal):
  - `npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p "<task>" --file "src/**"`

- Manual paste fallback (assemble bundle, copy to clipboard):
  - `npx -y @steipete/oracle --render --copy -p "<task>" --file "src/**"`
  - Note: `--copy` is a hidden alias for `--copy-markdown`.

## Attaching files (`--file`)

`--file` accepts files, directories, and globs. You can pass it multiple times; entries can be comma-separated.

- Include:
  - `--file "src/**"` (directory glob)
  - `--file src/index.ts` (literal file)
  - `--file docs --file README.md` (literal directory + file)

- Exclude (prefix with `!`):
  - `--file "src/**" --file "!src/**/*.test.ts" --file "!**/*.snap"`

- Defaults (important behavior from the implementation):
  - Default-ignored dirs: `node_modules`, `dist`, `coverage`, `.git`, `.turbo`, `.next`, `build`, `tmp` (skipped unless you explicitly pass them as literal dirs/files).
  - Honors `.gitignore` when expanding globs.
  - Does not follow symlinks (glob expansion uses `followSymbolicLinks: false`).
  - Dotfiles are filtered unless you explicitly opt in with a pattern that includes a dot-segment (e.g. `--file ".github/**"`).
  - Hard cap: files > 1 MB are rejected (split files or narrow the match).
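Putting those rules together, a selection might opt dotfiles in via a dot-segment pattern and pull one file out of a default-ignored directory via a literal path. The sketch below only prints the assembled command; the specific paths (`.github/**`, `dist/manifest.json`) are hypothetical examples, not requirements:

```shell
# Sketch: composing --file flags per the rules above (prints, does not run).
SELECT='--file "src/**" --file "!src/**/*.test.ts"'   # globs honor .gitignore
SELECT="$SELECT --file \".github/**\""                # dot-segment opts dotfiles in
SELECT="$SELECT --file dist/manifest.json"            # literal path bypasses the default-ignored dist/
CMD="npx -y @steipete/oracle --dry-run summary -p \"<task>\" $SELECT"
echo "$CMD"
```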

## Budget + observability

- Target: keep total input under ~196k tokens.
- Use `--files-report` (and/or `--dry-run json`) to spot the token hogs before spending.
- If you need hidden/advanced knobs: `npx -y @steipete/oracle --help --verbose`.

## Engines (API vs browser)

- Auto-pick: uses `api` when `OPENAI_API_KEY` is set, otherwise `browser`.
- Browser engine supports GPT + Gemini only; use `--engine api` for Claude/Grok/Codex or multi-model runs.
- **API runs require explicit user consent** before starting because they incur usage costs.
- Browser attachments:
  - `--browser-attachments auto|never|always` (auto pastes inline up to ~60k chars then uploads).
- Remote browser host (signed-in machine runs automation):
  - Host: `oracle serve --host 0.0.0.0 --port 9473 --token <secret>`
  - Client: `oracle --engine browser --remote-host <host:port> --remote-token <secret> -p "<task>" --file "src/**"`

## Sessions + slugs (don’t lose work)

- Stored under `~/.oracle/sessions` (override with `ORACLE_HOME_DIR`).
- Runs may detach or take a long time (browser + GPT‑5.2 Pro often does). If the CLI times out: don’t re-run; reattach.
  - List: `oracle status --hours 72`
  - Attach: `oracle session <id> --render`
- Use `--slug "<3-5 words>"` to keep session IDs readable.
- Duplicate prompt guard exists; use `--force` only when you truly want a fresh run.

## Prompt template (high signal)

Oracle starts with **zero** project knowledge. Assume the model cannot infer your stack, build tooling, conventions, or “obvious” paths. Include:
- Project briefing (stack + build/test commands + platform constraints).
- “Where things live” (key directories, entrypoints, config files, dependency boundaries).
- Exact question + what you tried + the error text (verbatim).
- Constraints (“don’t change X”, “must keep public API”, “perf budget”, etc).
- Desired output (“return patch plan + tests”, “list risky assumptions”, “give 3 options with tradeoffs”).
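The bullets above can be assembled into a single `-p` argument with a heredoc. Every project detail in this sketch (stack, paths, error text) is a placeholder to show the shape of a high-signal prompt, not a real briefing:

```shell
# Sketch: building a high-signal prompt in a heredoc, then passing it via -p.
# All project details are placeholders for your own briefing.
PROMPT=$(cat <<'EOF'
Briefing: TypeScript monorepo; pnpm; vitest. Build: `pnpm build`. Test: `pnpm test`.
Where things live: entrypoint src/index.ts; config in config/; parser in src/parser/.
Question: `pnpm test` fails with "TypeError: tokens is not iterable" in src/parser/lex.ts.
Tried: reverting the last lexer change; the failure persists.
Constraints: keep the public API of src/index.ts unchanged.
Desired output: a patch plan plus the unit tests that would catch this regression.
EOF
)
echo "npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p \"$PROMPT\" --file \"src/parser/**\""
```

The single-quoted `'EOF'` delimiter keeps backticks and `$` in the briefing literal.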

### “Exhaustive prompt” pattern (for later restoration)

When you know this will be a long investigation, write a prompt that can stand alone later:
- Top: 6–30 sentence project briefing + current goal.
- Middle: concrete repro steps + exact errors + what you already tried.
- Bottom: attach *all* context files needed so a fresh model can fully understand (entrypoints, configs, key modules, docs).

If you need to reproduce the same context later, re-run with the same prompt + `--file …` set (Oracle runs are one-shot; the model doesn’t remember prior runs).
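One way to make that restoration mechanical is to keep the exhaustive prompt in a file and rebuild the command from it; `prompt.md`, the slug, and the globs below are hypothetical names, not Oracle conventions. The sketch prints the command rather than running it:

```shell
# Sketch: store the prompt in a file so the same one-shot bundle is reproducible.
# prompt.md, the slug, and the --file set are illustrative placeholders.
printf '%s\n' "Briefing + repro steps + constraints go here." > prompt.md
CMD="npx -y @steipete/oracle --engine browser --model gpt-5.2-pro --slug \"lexer-crash-repro\" -p \"\$(cat prompt.md)\" --file \"src/**\" --file \"config/**\""
echo "$CMD"   # later: re-run this exact command to rebuild identical context
```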

## Safety

- Don’t attach secrets by default (`.env`, key files, auth tokens). Redact aggressively; share only what’s required.
- Prefer “just enough context”: fewer files + better prompt beats whole-repo dumps.

Overview

This skill wraps a prompt and a selected set of repository files into a single “one-shot” bundle and sends it to a secondary model for review, debugging, refactors, or design validation. It supports either browser-driven ChatGPT sessions (recommended for GPT‑5.2 Pro) or API-based runs, and emphasizes safe, minimal context and reproducible sessions. Treat outputs as advisory and verify changes against code and tests.

How this skill works

You specify files, globs, and a clear prompt; the CLI packages the prompt plus the selected files into one request and submits it to a chosen engine (browser or API). The browser mode integrates with ChatGPT sessions that may detach and be reattached later; the API mode runs directly but requires explicit consent for usage. The tool previews token and file impact and enforces defaults like ignoring node_modules, respecting .gitignore, and rejecting files >1MB.

When to use it

  • Debugging with real repo context when stack traces or code snippets are insufficient
  • Design reviews or refactor proposals that need local files attached
  • Cross-validation between models or a second opinion on tricky changes
  • When you want a reproducible, stored session you can reattach to later
  • When you need a quick token/cost sanity check before running a long job

Best practices

  • Send the smallest file set that still contains the truth—tight globs beat whole-repo dumps
  • Preview with --dry-run and --files-report to spot big files and token hogs
  • Prefer browser + GPT‑5.2 Pro for human-in-the-loop sessions; use API only when you need it
  • Include a short project briefing, exact repro steps, error text, constraints, and desired output in the prompt
  • Avoid attaching secrets; redact .env and keys and honor the default ignored dirs

Example use cases

  • Ask GPT‑5.2 Pro to propose a patch plan and unit tests for a failing CI test by attaching the relevant modules and configs
  • Run a cross-model validation: bundle core files and compare recommendations between models via API runs
  • Use --dry-run summary to inspect token estimates before launching a costly browser session
  • Assemble a reproducible investigation prompt (project briefing + files) so future runs can be reattached and audited

FAQ

How do I avoid sending secrets?

Exclude .env and key files explicitly and prefer narrow globs; Oracle filters dotfiles unless you opt in and skips default ignored dirs.

When should I use browser vs API engine?

Use browser for GPT‑5.2 Pro human-in-the-loop sessions (long-running, reattachable). Use API when you need other models or automated multi-model runs, and be aware API runs require explicit consent.

What if a run detaches or times out?

Do not re-run. Reattach to the stored session under ~/.oracle/sessions (or ORACLE_HOME_DIR) using oracle session <id> --render; you can list recent sessions with oracle status --hours 72.