
changeset-validation skill

/.codex/skills/changeset-validation

This skill validates changesets by applying LLM judgment to git diffs, checking for the correct bump level and compliance with repository rules.

npx playbooks add skill openai/openai-agents-js --skill changeset-validation

Review the files below or copy the command above to add this skill to your agents.

Files (6)
SKILL.md
1.9 KB
---
name: changeset-validation
description: Validate changesets in openai-agents-js using LLM judgment against git diffs (including uncommitted local changes). Use when packages/ or .changeset/ are modified, or when verifying PR changeset compliance and bump level.
---

# Changeset Validation

## Overview

This skill validates whether changesets correctly reflect package changes and follow the repository rules. It relies on the shared prompt in `references/validation-prompt.md` so local Codex reviews and GitHub Actions share the same logic.
Experimental or preview-only feature additions that are explicitly labeled as such in the diff may stay at a patch bump when they do not change existing behavior.
Major bumps are only allowed after the first major release; before then, do not use a major bump for feature-level changes.

## Quick start

Local (Codex-driven):

1. Run:
   ```bash
   pnpm changeset:validate-prompt
   ```
2. Apply the rules from `references/validation-prompt.md` to the generated prompt.
3. Respond with a JSON verdict containing `ok`, `errors`, `warnings`, and `required_bump` (English-only strings), as in the example below.
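
   A minimal example verdict; the field set comes from this skill, but the values are illustrative:

   ```json
   {
     "ok": true,
     "errors": [],
     "warnings": ["Summary could name the affected export."],
     "required_bump": "patch"
   }
   ```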

CI (Codex Action):

1. Run:
   ```bash
   pnpm changeset:validate-prompt -- --ci --output .github/codex/prompts/changeset-validation.generated.md
   ```
2. Use `openai/codex-action` with the generated prompt and JSON schema to get a structured verdict (see the sketch below).
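
   A minimal workflow sketch; the `openai/codex-action` input names, version tag, and schema path are assumptions, so check the action's documentation for its actual interface:

   ```yaml
   # Sketch only: action inputs and file paths below are assumptions,
   # not the verified openai/codex-action interface.
   - name: Generate changeset validation prompt
     run: pnpm changeset:validate-prompt -- --ci --output .github/codex/prompts/changeset-validation.generated.md

   - name: Judge changesets with Codex
     uses: openai/codex-action@v1 # version tag is an assumption
     with:
       prompt-file: .github/codex/prompts/changeset-validation.generated.md # assumed input name
       output-schema-file: .github/codex/schemas/changeset-verdict.json # hypothetical path and input name
   ```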

## Workflow

1. Generate the prompt context via `pnpm changeset:validate-prompt`.
2. Apply the rules in `references/validation-prompt.md` to judge correctness.
3. Provide a clear verdict and required bump (patch/minor/major/none).
4. If the changeset needs edits, update it and re-run the validation (an example changeset follows).
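
   For reference, a changeset is a markdown file under `.changeset/` whose YAML frontmatter names each bumped package and its bump level; the package name and summary here are illustrative:

   ```md
   ---
   '@openai/agents-core': patch
   ---

   Fix retry backoff when the model returns a rate-limit error.
   ```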

## Shared source of truth

- Keep the prompt file as the single source of validation rules.
- Keep the script lightweight: it should only gather context and emit the prompt.

## Resources

- `references/validation-prompt.md`

Overview

This skill validates changesets in the openai-agents-js monorepo by using an LLM to judge git diffs, including uncommitted local changes. It ensures changeset files under .changeset/ and package modifications under packages/ follow repository rules and produce the correct version bump. Use it locally or in CI to generate a structured JSON verdict with ok/errors/warnings/required_bump. The validation logic is driven by a single shared prompt so local and CI reviews are consistent.

How this skill works

The tool gathers context from the current git diff (committed and uncommitted) and the repository tree, then generates a prompt that encodes the validation rules from the shared reference prompt. An LLM evaluates whether each changeset accurately describes the code changes and recommends a required bump level (patch/minor/major/none). The output is a JSON verdict containing ok, errors, warnings, and required_bump suitable for human review or automatic gating in CI.
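
A verdict schema matching these fields might look like the sketch below; the repository's actual schema may differ:

```json
{
  "type": "object",
  "properties": {
    "ok": { "type": "boolean" },
    "errors": { "type": "array", "items": { "type": "string" } },
    "warnings": { "type": "array", "items": { "type": "string" } },
    "required_bump": { "enum": ["patch", "minor", "major", "none"] }
  },
  "required": ["ok", "errors", "warnings", "required_bump"],
  "additionalProperties": false
}
```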

When to use it

  • Before opening or updating a pull request that touches packages/ or .changeset/.
  • Locally, to check uncommitted edits and confirm the correct bump before committing.
  • In CI to enforce consistent changeset quality and bump rules across contributors.
  • When confirming whether experimental or preview-labeled changes can be patch bumps.
  • To verify that a PR's changeset complies with repository bump policies before release.

Best practices

  • Keep the shared prompt file as the single source of truth for validation rules.
  • Run validation locally prior to pushing changes to reduce CI failures.
  • Include explicit experimental/preview labels in diffs when intending a smaller bump.
  • Treat the LLM verdict as the primary signal, but review its errors and warnings manually when needed.
  • Use the generated JSON verdict to gate CI checks or to provide clear reviewer guidance.

Example use cases

  • Run locally (pnpm changeset:validate-prompt) to check uncommitted changesets and get an immediate JSON verdict.
  • Integrate into GitHub Actions to validate PRs automatically and reject incorrect bump levels.
  • Audit a proposed minor feature to ensure it doesn't mistakenly request a major bump.
  • Validate that documentation-only edits do not trigger package version bumps.
  • Confirm multiple package changes are reflected correctly across their corresponding changesets.

FAQ

What does the LLM output look like?

It returns a JSON object with ok (boolean), errors (list), warnings (list), and required_bump (patch/minor/major/none).

Can I validate uncommitted local changes?

Yes—the tool reads the working tree and includes uncommitted diffs when generating the prompt for the LLM.