
oracle skill


This skill helps you validate and debug code by bundling prompts with files and obtaining a second-model review for design checks and cross-validation.

npx playbooks add skill openclaw/skills --skill oracle

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: oracle
description: Use the @steipete/oracle CLI to bundle a prompt plus the right files and get a second-model review (API or browser) for debugging, refactors, design checks, or cross-validation.
---

# Oracle (CLI) — best use

Oracle bundles your prompt + selected files into one “one-shot” request so another model can answer with real repo context (API or browser automation). Treat outputs as advisory: verify against the codebase + tests.

## Main use case (browser, GPT‑5.2 Pro)

Default workflow here: `--engine browser` with GPT‑5.2 Pro in ChatGPT. This is the “human in the loop” path: it can take ~10 minutes to ~1 hour; expect a stored session you can reattach to.

Recommended defaults:
- Engine: browser (`--engine browser`)
- Model: GPT‑5.2 Pro (either `--model gpt-5.2-pro` or a ChatGPT picker label like `--model "5.2 Pro"`)
- Attachments: directories/globs + excludes; avoid secrets.

## Golden path (fast + reliable)

1. Pick a tight file set (fewest files that still contain the truth).
2. Preview what you’re about to send (`--dry-run` + `--files-report` when needed).
3. Run in browser mode for the usual GPT‑5.2 Pro ChatGPT workflow; use API only when you explicitly want it.
4. If the run detaches or times out: reattach to the stored session (don't re-run).
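
The golden path can be sketched as a shell sequence. The task text and globs below are invented for illustration; only the flags documented in this skill are used, and the preview command is echoed rather than executed so the sketch stays inert — drop the `echo` to run it for real.

```shell
# Golden-path sketch. TASK and globs are hypothetical; the flags are the documented ones.
TASK="Why does the retry loop in src/net/retry.ts fire twice on timeout?"
FILES=(--file "src/net/**" --file "!**/*.test.ts")   # tight include + test exclude

# Steps 1-2: preview the bundle and its token footprint (spends no tokens):
PREVIEW=(npx -y @steipete/oracle --dry-run summary --files-report -p "$TASK" "${FILES[@]}")
echo "${PREVIEW[@]}"

# Step 3: the actual browser run (commented out here; long-running is normal):
# npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p "$TASK" "${FILES[@]}"

# Step 4: if it times out, reattach rather than re-run:
# npx -y @steipete/oracle status --hours 72
# npx -y @steipete/oracle session <id> --render
```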

## Commands (preferred)

- Show help (once/session):
  - `npx -y @steipete/oracle --help`

- Preview (no tokens):
  - `npx -y @steipete/oracle --dry-run summary -p "<task>" --file "src/**" --file "!**/*.test.*"`
  - `npx -y @steipete/oracle --dry-run full -p "<task>" --file "src/**"`

- Token/cost sanity:
  - `npx -y @steipete/oracle --dry-run summary --files-report -p "<task>" --file "src/**"`

- Browser run (main path; long-running is normal):
  - `npx -y @steipete/oracle --engine browser --model gpt-5.2-pro -p "<task>" --file "src/**"`

- Manual paste fallback (assemble bundle, copy to clipboard):
  - `npx -y @steipete/oracle --render --copy -p "<task>" --file "src/**"`
  - Note: `--copy` is a hidden alias for `--copy-markdown`.

## Attaching files (`--file`)

`--file` accepts files, directories, and globs. You can pass it multiple times; entries can be comma-separated.

- Include:
  - `--file "src/**"` (directory glob)
  - `--file src/index.ts` (literal file)
  - `--file docs --file README.md` (literal directory + file)

- Exclude (prefix with `!`):
  - `--file "src/**" --file "!src/**/*.test.ts" --file "!**/*.snap"`

- Defaults (important behavior from the implementation):
  - Default-ignored dirs: `node_modules`, `dist`, `coverage`, `.git`, `.turbo`, `.next`, `build`, `tmp` (skipped unless you explicitly pass them as literal dirs/files).
  - Honors `.gitignore` when expanding globs.
  - Does not follow symlinks (glob expansion uses `followSymbolicLinks: false`).
  - Dotfiles are filtered unless you explicitly opt in with a pattern that includes a dot-segment (e.g. `--file ".github/**"`).
  - Hard cap: files > 1 MB are rejected (split files or narrow the match).
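
Because of the 1 MB cap, it can help to scan a match for oversized files before attaching. This is plain `find`, independent of oracle itself; the demo directory and files below are scaffolding so the sketch is self-contained.

```shell
# Build a throwaway directory with one oversized and one small file.
mkdir -p /tmp/oracle-size-demo/src
head -c 2097152 /dev/zero > /tmp/oracle-size-demo/src/huge.bin   # 2 MB: would be rejected
printf 'export const ok = true;\n' > /tmp/oracle-size-demo/src/small.ts

# List files over 1 MB -- these would be rejected from the bundle.
# Point this at whatever you actually plan to attach (e.g. src/).
find /tmp/oracle-size-demo/src -type f -size +1M
```

If anything turns up, split the file or narrow the glob before running.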

## Budget + observability

- Target: keep total input under ~196k tokens.
- Use `--files-report` (and/or `--dry-run json`) to spot the token hogs before spending.
- If you need hidden/advanced knobs: `npx -y @steipete/oracle --help --verbose`.

## Engines (API vs browser)

- Auto-pick: uses `api` when `OPENAI_API_KEY` is set, otherwise `browser`.
- Browser engine supports GPT + Gemini only; use `--engine api` for Claude/Grok/Codex or multi-model runs.
- **API runs require explicit user consent** before starting because they incur usage costs.
- Browser attachments:
  - `--browser-attachments auto|never|always` (auto pastes inline up to ~60k chars then uploads).
- Remote browser host (signed-in machine runs automation):
  - Host: `oracle serve --host 0.0.0.0 --port 9473 --token <secret>`
  - Client: `oracle --engine browser --remote-host <host:port> --remote-token <secret> -p "<task>" --file "src/**"`

## Sessions + slugs (don’t lose work)

- Stored under `~/.oracle/sessions` (override with `ORACLE_HOME_DIR`).
- Runs may detach or take a long time (the browser + GPT‑5.2 Pro path often does). If the CLI times out: don't re-run; reattach.
  - List: `oracle status --hours 72`
  - Attach: `oracle session <id> --render`
- Use `--slug "<3-5 words>"` to keep session IDs readable.
- Duplicate prompt guard exists; use `--force` only when you truly want a fresh run.
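
A minimal sketch of the storage override plus the reattach flow; the override path is hypothetical, and the reattach commands (from this skill) are commented out since they need a real session id.

```shell
# Sessions live under ~/.oracle/sessions by default; ORACLE_HOME_DIR overrides the root.
export ORACLE_HOME_DIR="$HOME/.cache/oracle-home"   # hypothetical location
mkdir -p "$ORACLE_HOME_DIR/sessions"

# Reattach flow after a timeout (id is a placeholder):
# npx -y @steipete/oracle status --hours 72
# npx -y @steipete/oracle session <id> --render
```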

## Prompt template (high signal)

Oracle starts with **zero** project knowledge. Assume the model cannot infer your stack, build tooling, conventions, or “obvious” paths. Include:
- Project briefing (stack + build/test commands + platform constraints).
- “Where things live” (key directories, entrypoints, config files, dependency boundaries).
- Exact question + what you tried + the error text (verbatim).
- Constraints (“don’t change X”, “must keep public API”, “perf budget”, etc).
- Desired output (“return patch plan + tests”, “list risky assumptions”, “give 3 options with tradeoffs”).
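
A prompt following this template might be assembled like so. The project briefing, paths, and error text are invented for illustration; only `-p`, `--file`, `--engine`, and `--model` are oracle flags, and the run itself is commented out.

```shell
# Hypothetical briefing written to a file, then passed via -p.
cat > /tmp/oracle-prompt.md <<'EOF'
Briefing: TypeScript monorepo; pnpm + vitest; Node 20. No network access in tests.
Where things live: src/server/ is the API entrypoint; config lives in config/.
Question: POST /sync intermittently returns 409. Error (verbatim):
  ConflictError: revision mismatch for doc 81f3
Tried: serializing writes per doc id; disabling the cache middleware. Still flaky.
Constraints: keep the public REST API unchanged; no new runtime dependencies.
Desired output: a patch plan plus the tests you would add, with risky assumptions listed.
EOF

# npx -y @steipete/oracle --engine browser --model gpt-5.2-pro \
#   -p "$(cat /tmp/oracle-prompt.md)" --file "src/server/**" --file "config/**"
```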

### “Exhaustive prompt” pattern (for later restoration)

When you know this will be a long investigation, write a prompt that can stand alone later:
- Top: 6–30 sentence project briefing + current goal.
- Middle: concrete repro steps + exact errors + what you already tried.
- Bottom: attach *all* context files needed so a fresh model can fully understand (entrypoints, configs, key modules, docs).

If you need to reproduce the same context later, re-run with the same prompt + `--file …` set (Oracle runs are one-shot; the model doesn’t remember prior runs).

## Safety

- Don’t attach secrets by default (`.env`, key files, auth tokens). Redact aggressively; share only what’s required.
- Prefer “just enough context”: fewer files + better prompt beats whole-repo dumps.
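
One way to make "no secrets" the default shape of every run is a standing exclude list reused across invocations. The glob names below are examples, not a complete denylist; extend them for your repo.

```shell
# Standing excludes for secret-ish files (uses the documented "!" exclude syntax).
EXCLUDES=(--file "!**/.env*" --file "!**/*.pem" --file "!**/secrets/**" --file "!**/*credentials*")

# Reuse on every run, e.g.:
# npx -y @steipete/oracle -p "<task>" --file "src/**" "${EXCLUDES[@]}"
echo "${#EXCLUDES[@]} exclude args"
```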

Overview

This skill integrates the @steipete/oracle CLI to bundle a prompt with selected files and get a second-model review for debugging, refactors, design checks, or cross-validation. It streamlines sending a focused code snapshot to another model (API or browser) so you get context-aware, advisory feedback tied to real repo files. Treat outputs as guidance and verify changes with your tests and reviewers.

How this skill works

The skill collects the files you specify (files, dirs, and globs), applies excludes and default ignore rules, and builds a single one-shot request that includes your prompt plus the file bundle. It can run via browser automation (recommended for GPT-5.2 Pro) or via API, supports preview/dry-run modes to estimate tokens and cost, and stores long-running sessions you can reattach to instead of re-running. Attachments are managed with sensible defaults (node_modules skipped, files over 1 MB rejected, .gitignore honored), and token budgets are surfaced with reports.

When to use it

  • Debugging a tricky runtime error that requires real repo context
  • Asking for a refactor plan that must respect public APIs and constraints
  • Design reviews where cross-file understanding is required
  • Cross-validating fixes or suggested patches from another model
  • Long investigations where you want a stored, reattachable session

Best practices

  • Send the smallest set of files that still contains the truth; avoid whole-repo dumps
  • Preview with --dry-run and --files-report to check token/cost before running
  • Use browser engine + GPT-5.2 Pro for interactive, human-in-the-loop runs; use API only when you accept usage costs
  • Include a concise project briefing, repro steps, exact error text, constraints, and desired output in the prompt
  • Avoid attaching secrets; redact or exclude sensitive files and keep inputs under the token target (~196k tokens)

Example use cases

  • Run a browser session to have GPT-5.2 Pro audit a failing test suite with relevant source files attached
  • Generate a safe refactor plan that preserves public APIs and includes a patch summary and tests
  • Use --dry-run summary to discover which files inflate token usage before committing to a paid API run
  • Attach a focused set of modules and ask for three alternative approaches with tradeoffs and estimated effort
  • Start a long investigation and reattach to the stored session if the run detaches or times out

FAQ

What engine should I pick by default?

Use --engine browser with GPT-5.2 Pro for the usual interactive workflow; the CLI auto-picks browser unless OPENAI_API_KEY is set.

How do I avoid token or cost surprises?

Run --dry-run with --files-report to see token estimates and file sizes; keep total input under the ~196k token target and exclude large files.