
accelint-skill-manager skill


This skill helps you create, refactor, or audit AI skills by guiding packaging, naming, and integration steps for reliable agent capabilities.

npx playbooks add skill gohypergiant/agent-skills --skill accelint-skill-manager


# SKILL.md

## Overview

A general rule of thumb is to follow the guidance from [Agent Skills](https://agentskills.io/). Since the overview and reference table of contents are contained in the `AGENTS.md` file, the content of this `AGENTS.md` file should be optimized toward adding any additional context, hints, or suggestions that help an agent more accurately determine whether this skill is relevant. The persona and target audience for this document is an AI agent or LLM.

Do not link to other skills' files directly. Use skill name references instead.

### Rich Description Field

**Purpose:** Some agents like Claude read the description field to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"

**Format:** Start with "Use when..." to focus on triggering conditions

**CRITICAL: Description = When to Use, NOT What the Skill Does**

The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.

**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, an agent may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused an agent to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).

When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), the agent correctly read the flowchart and followed the two-stage review process.

**The trap:** Descriptions that summarize workflow create a shortcut an agent will take. The skill body becomes documentation that an agent skips.

**❌ Incorrect: summarizes workflow - agent may follow this instead of reading skill**
```
description: Use when executing plans - dispatches subagent per task with code review between tasks
```

**❌ Incorrect: too much process detail**
```
description: Use for TDD - write test first, watch it fail, write minimal code, refactor
```

**✅ Correct: just triggering conditions, no workflow summary**
```
description: Use when executing implementation plans with independent tasks in the current session
```

**✅ Correct: triggering conditions only**
```
description: Use when implementing any feature or bugfix, before writing implementation code
```

**Content:**
- Use concrete triggers, symptoms, and situations that signal this skill applies
- Describe the *problem* (race conditions, inconsistent behavior) not *language-specific symptoms* (setTimeout, sleep)
- Keep triggers technology-agnostic unless the skill itself is technology-specific
- If skill is technology-specific, make that explicit in the trigger
- Write in third person (injected into system prompt)
- **NEVER summarize the skill's process or workflow**

**❌ Incorrect: too abstract, vague, doesn't include when to use**
```
description: For async testing
```

**❌ Incorrect: first person**
```
description: I can help you with async tests when they're flaky
```

**❌ Incorrect: mentions technology but skill isn't specific to it**
```
description: Use when tests use setTimeout/sleep and are flaky
```

**✅ Correct: Starts with "Use when", describes problem, no workflow**
```
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently
```

**✅ Correct: technology-specific skill with explicit trigger**
```
description: Use when using Next.js and handling authentication redirects
```
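Putting these rules together, a complete frontmatter block for a hypothetical test-stability skill might look like this (the skill name and wording are illustrative, not from a real skill):

```
---
name: flaky-test-debugging
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently
---
```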

### Keyword Coverage

Use words an agent would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: Actual commands, library names, file types
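For example, a description for a hypothetical test-stability skill can fold several of these search terms directly into its trigger:

```
description: Use when tests are flaky, hang, or time out intermittently, or when teardown/afterEach cleanup leaves zombie processes
```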

### Cross-Referencing Other Skills

**When writing documentation that references other skills:**

Use skill name only, with explicit requirement markers:
- ✅ Correct: `**REQUIRED SUB-SKILL:** Use ts-best-practices`
- ✅ Correct: `**REQUIRED BACKGROUND:** You MUST understand vitest-best-practices`
- ❌ Incorrect: `See skills/vitest-best-practices` (unclear if required)
- ❌ Incorrect: `@skills/react-best-practices/SKILL.md` (force-loads, burns context)

**Why no @ links:** `@` syntax force-loads files immediately, consuming context before you need them.
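In a skill body, such a reference might sit inside the step that needs it (skill names here are illustrative):

```
## Step 2: Write the failing test

**REQUIRED SUB-SKILL:** Use vitest-best-practices for test structure and cleanup.
```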

---

Reference: https://agentskills.io/specification#skill-md-format

Overview

This skill helps create, refactor, audit, and package skills that extend AI agents with specialized knowledge, workflows, or tool integrations. It is designed for authors who need a repeatable, agent-friendly format and clear triggering descriptions. Use it when adding or maintaining skills that agents will load or compose into workflows.

How this skill works

The skill inspects skill metadata, description fields, triggers, and supporting scripts to ensure they follow agent-friendly conventions and loading heuristics. It validates that descriptions are trigger-focused (start with "Use when"), technology specificity is explicit, and cross-skill references use the required skill-name format. It can scaffold shell-based packaging scripts and surface concrete fixes for description, keyword coverage, and cross-referencing issues.
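As a sketch of the description check mentioned above (a hypothetical helper, not the skill's actual implementation), a small shell function can verify that a description field starts with "Use when":

```shell
#!/usr/bin/env sh
# Hypothetical sketch of one audit check: a trigger-focused description
# must start with "Use when". Not the skill's actual implementation.
check_description() {
  desc="$1"
  case "$desc" in
    "Use when"*) echo "ok" ;;
    *) echo "fail: description should start with 'Use when'" ;;
  esac
}

check_description "Use when tests have race conditions or pass/fail inconsistently"  # → ok
check_description "For async testing"  # → fail: description should start with 'Use when'
```

A real audit would also check the other conventions (no workflow summary, third person, explicit technology triggers), but the pattern is the same: match against the text of the `description:` field.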

When to use it

  • When creating a new skill for an AI agent ("create a skill", "make a new skill", "build a skill")
  • When packaging functionality as a reusable skill or preparing a skill for distribution
  • When refactoring or updating an existing skill to improve discoverability or correctness
  • When auditing skills to ensure they follow agent-loading and trigger conventions
  • When converting ad-hoc scripts or workflows into agent-loadable skills

Best practices

  • Always start the description with "Use when" and describe triggering conditions, not the workflow
  • Keep triggers concrete and problem-focused (symptoms, error messages, environments), technology-specific only when appropriate
  • Cover relevant keywords and error messages agents might search for to improve discoverability
  • Reference other skills by exact skill name with explicit requirement markers (e.g., REQUIRED SUB-SKILL: skill-name)
  • Provide shell packaging scripts or simple build steps when the skill's primary language is Shell, to ease installation
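The packaging suggestion above can be sketched as a minimal shell function (the script layout is an assumption, not the official tooling):

```shell
#!/usr/bin/env sh
# Hypothetical packaging sketch: bundle a skill directory (SKILL.md plus
# any supporting files) into a tarball for distribution.
set -eu

package_skill() {
  skill_dir="$1"
  name=$(basename "$skill_dir")

  # Every distributable skill must ship a SKILL.md at its root.
  if [ ! -f "$skill_dir/SKILL.md" ]; then
    echo "error: no SKILL.md in $skill_dir" >&2
    return 1
  fi

  # Bundle the whole directory so relative references stay intact.
  tar -czf "$name.tar.gz" -C "$(dirname "$skill_dir")" "$name"
  echo "packaged $name.tar.gz"
}
```

Usage: `package_skill path/to/my-skill` produces `my-skill.tar.gz` in the current directory.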

Example use cases

  • Authoring a new skill that adds a CI test-generation helper and packaging it for agent consumption
  • Refactoring a skill whose description summarized workflow so agents incorrectly skipped detailed checks
  • Auditing a set of skills to ensure keyword coverage for common error messages like "timeout" or "ENOTEMPTY"
  • Converting a collection of shell helpers into a single agent-loadable skill with explicit triggers
  • Preparing a skill that integrates with third-party tools and ensuring cross-skill references are correctly marked

FAQ

Should descriptions ever summarize the workflow?

No. Descriptions must only describe triggering conditions; workflow summaries cause agents to skip the full skill content.

How should other skills be referenced?

Reference by skill name only and mark requirements explicitly, for example: REQUIRED SUB-SKILL: ts-best-practices.