
refine skill

/codex/skills/refine

This skill refines an existing Codex skill using minimal diffs and quick validation to improve reliability and triggers.

npx playbooks add skill tkersey/dotfiles --skill refine

Review the files below or copy the command above to add this skill to your agents.

Files (2)

SKILL.md (2.5 KB)
---
name: refine
description: Refine an existing Codex skill via $ms with minimal diffs, then validate with quick_validate. Trigger when asked to improve a skill's trigger description/frontmatter, workflow text, metadata, scripts/references/assets, or agents/openai.yaml; also for requests to iterate, refactor, rename, or fix a skill using usage/session-mining evidence (for example from $seq).
---

# Refine

## Overview

Refine a target Codex skill by turning evidence into minimal, validated updates using $ms.

## Inputs

- Target skill name or path
- Improvement signals (user feedback, session mining notes, errors, missing steps)
- Constraints (minimal diff, required tooling, validation requirements)

## Example Prompts

- "Refine the docx skill to tighten triggers and regenerate agents/openai.yaml."
- "Add a small script to the pdf skill, then validate it."
- "Use session-mining notes to refine the gh skill's workflow."

## Workflow (Double Diamond)

### Discover

- Read the target skill's `SKILL.md`, `agents/openai.yaml` (if present), and any `scripts/`, `references/`, or `assets/`.
- Collect evidence from usage: confusion points, missing steps, bad triggers, or stale metadata.
- If no example prompts are provided, synthesize 2-3 realistic prompts that should trigger the skill.
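The discovery pass above can be sketched in Python. This is a minimal sketch, not part of the skill's tooling; the skill name, files, and temp-directory layout are hypothetical stand-ins for a real target skill:

```python
from pathlib import Path
import tempfile

# Hypothetical target skill laid out in a temp dir for illustration.
root = Path(tempfile.mkdtemp()) / "my-skill"
(root / "scripts").mkdir(parents=True)
(root / "SKILL.md").write_text(
    "---\nname: my-skill\ndescription: demo\n---\n# My Skill\n"
)

def inventory(skill_dir: Path) -> dict:
    """Collect the artifacts a refinement pass should read first."""
    return {
        "skill_md": (skill_dir / "SKILL.md").exists(),
        "agents_yaml": (skill_dir / "agents" / "openai.yaml").exists(),
        "resources": sorted(
            p.name for p in skill_dir.iterdir()
            if p.is_dir() and p.name in {"scripts", "references", "assets"}
        ),
    }

print(inventory(root))
```

A missing `agents/openai.yaml` here is itself evidence: it feeds directly into the Deliver step's "regenerate if stale or missing."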

### Define

- Write a one-line problem statement and 2-3 success criteria.
- Choose the smallest change set that addresses the evidence.
- Record explicit constraints (always run quick_validate, minimal diffs, required tooling).

### Develop

- List candidate updates: frontmatter description, workflow steps, new resources, or metadata regeneration.
- Prefer minimal-incision improvements; only add resources when they are repeatedly reused or required for determinism.
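One way to keep the incision small is to measure a proposed diff before applying it. A sketch using the standard library's `difflib`; the before/after text is invented for illustration:

```python
import difflib

before = "Refine a skill.\nRun validation sometimes.\n"
after = "Refine a skill.\nAlways run quick_validate after changes.\n"

diff = list(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="SKILL.md (before)",
    tofile="SKILL.md (after)",
))

# Count changed lines (skip the +++/--- header lines) to keep the incision small.
changed = sum(
    1 for line in diff
    if line[:1] in "+-" and line[:3] not in ("+++", "---")
)
print("".join(diff))
print(f"changed lines: {changed}")
```

If the count balloons past what the problem statement justifies, that is a signal to split the change or drop candidates.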

### Deliver

- Invoke $ms to implement changes in-place on the target skill.
- Keep SKILL.md frontmatter compliant for the target skill (name/description only unless a system skill allows more).
- Regenerate `agents/openai.yaml` if stale or missing.
- If adding scripts, run a representative sample to confirm behavior.
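The "representative sample" step can be sketched as a subprocess check. `shout.py` below is a hypothetical stand-in for whatever script the refinement adds, not a real resource:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Hypothetical added script: echoes its argument uppercased.
script = Path(tempfile.mkdtemp()) / "shout.py"
script.write_text("import sys; print(sys.argv[1].upper())\n")

# Representative sample run: confirm the new resource behaves before shipping it.
result = subprocess.run(
    [sys.executable, str(script), "sample"],
    capture_output=True, text=True, check=True,
)
assert result.stdout.strip() == "SAMPLE"
print("script check passed:", result.stdout.strip())
```

`check=True` makes a non-zero exit raise immediately, so a broken script fails the refinement rather than slipping into the diff.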

## Validation

Always run quick_validate on the target skill:

```shell
uv run --with pyyaml -- python3 codex/skills/.system/skill-creator/scripts/quick_validate.py codex/skills/<skill-name>
```
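As a toy illustration of the kind of frontmatter compliance check involved (name and description as the only top-level keys), here is a naive parse in Python. This is an assumption about what such a check looks like; the real quick_validate script remains the source of truth:

```python
import re

def frontmatter_keys(skill_md: str) -> list[str]:
    """Return top-level keys in the YAML frontmatter block (naive parse)."""
    m = re.match(r"^---\n(.*?)\n---\n", skill_md, re.DOTALL)
    if not m:
        return []
    return [
        line.split(":", 1)[0].strip()
        for line in m.group(1).splitlines()
        if ":" in line and not line.startswith(" ")
    ]

doc = "---\nname: refine\ndescription: Refine an existing Codex skill.\n---\n# Refine\n"
keys = frontmatter_keys(doc)
# name/description only, unless a system skill allows more.
assert keys == ["name", "description"], keys
print("frontmatter ok:", keys)
```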

## Output Checklist

- Updated `SKILL.md` with accurate triggers and clear workflow
- Updated or regenerated `agents/openai.yaml` when needed
- New or modified resources (scripts/references/assets) if justified
- Validation signal from quick_validate (and script runs if added)

Overview

This skill refines an existing Codex skill by translating usage evidence into minimal, validated updates. It focuses on tightening triggers, correcting workflow text, and updating metadata, scripts, and runtime configuration. The goal is small, auditable diffs that fix real-world failures and make triggers fire more reliably.

How this skill works

It inspects the target skill’s frontmatter, workflow text, referenced scripts, assets, and agent configuration, then synthesizes concise problem statements and small change sets. Changes are applied with a minimal-diff mindset and validated automatically using a quick validation script. If configuration or scripts are stale or missing, it regenerates or adds only the smallest required resources and runs representative checks.

When to use it

  • Improve or tighten trigger descriptions and frontmatter
  • Iterate or refactor a skill based on usage/session-mining evidence
  • Fix broken workflows, missing steps, or metadata errors
  • Add a tiny helper script required for deterministic behavior
  • Regenerate or repair agent runtime configuration files when stale

Best practices

  • Collect concrete evidence: user feedback, session logs, error traces, or failing examples before changing the skill
  • Define a one-line problem statement and 2–3 success criteria up front
  • Prefer the smallest change set that addresses the evidence; avoid large rewrites
  • Always run the quick validation step after changes and run any added scripts on a sample input
  • Document the intent and validation result in the change commit message

Example use cases

  • Tighten triggers for a document-processing skill after users report false positives
  • Use session-mining notes to remove a confusing workflow step and add a brief clarifying sentence
  • Add a small conversion script to a PDF skill, then run a sample conversion and validate
  • Regenerate the agent runtime config when prompts or model settings become inconsistent with current usage
  • Rename a workflow step and update references to avoid a mismatch between docs and code

FAQ

How small should a change be?

Make the minimal change that meets the defined success criteria; prefer targeted edits over broad refactors.

What validation is required?

Run the quick validation tool and any representative script runs for added resources; capture the validation signal in the result.