
This skill enforces QA best practices by generating granular test cases, each covering a single condition on a single screen, with clear naming conventions.

npx playbooks add skill hoangnguyen0403/agent-skills-standard --skill quality-assurance


SKILL.md
---
name: Quality Assurance
description: Standards for creating high-quality, granular manual test cases and QA processes.
metadata:
  labels: [qa, testing, best-practices]
  triggers:
    files: ['**/*.feature', '**/*.test.ts', '**/test_plan.md']
    keywords: [test case, qa, bug report, testing standard]
---

# Quality Assurance Standards

## **Priority: P1 (HIGH)**

## 1. Test Case Granularity

- **1 Test Case = 1 Condition on 1 Screen**.
  - **Split Screens**: "Order Details" & "Item Details" are separate.
  - **Split Conditions**: "Config A" & "Config B" are separate.
- **No "OR" Logic**: Each TC must test a single, distinct path.
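
The split rules above can be sketched as data. This is an illustrative example, not part of the skill itself; the case titles are hypothetical:

```typescript
// Hypothetical illustration: one combined case rewritten as two granular cases.
// An " or " inside a title is a signal the case should be split.
const combined = "Order_Verify total on Order Details when Config A or Config B"; // avoid

const split = [
  "Order_Verify total on Order Details when Config A",
  "Order_Verify total on Order Details when Config B",
];

// A simple guard: no granular case title should contain " or ".
const hasOrLogic = (title: string): boolean => / or /i.test(title);

console.log(hasOrLogic(combined));          // true (needs splitting)
console.log(split.some(hasOrLogic));        // false (each case is a single path)
```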

## 2. Naming Convention

- **Pattern**: `([Platform]) [Module]_[Action] on [Screen] when [Condition]`
- **Rule**: Only include `[Platform]` if the requirement is exclusive to one platform (e.g., `[Mobile]`). Omit it if the case supports **Both**.
- **Example**: `Order_Verify payment term on Item Details when Toggle is OFF` (Supports Both)
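
A lightweight lint for this naming pattern could look like the sketch below. The regex and the platform names (`Mobile`, `Web`) are assumptions for illustration; the skill does not ship a validator:

```typescript
// Hypothetical check for the pattern:
// ([Platform]) [Module]_[Action] on [Screen] when [Condition]
// Platform tag is optional and, when present, bracketed (e.g. "[Mobile] ").
const TC_NAME = /^(?:\[(Mobile|Web)\]\s+)?\w+_[^_]+ on .+ when .+$/;

function isValidName(title: string): boolean {
  return TC_NAME.test(title);
}

console.log(isValidName("Order_Verify payment term on Item Details when Toggle is OFF")); // true
console.log(isValidName("[Mobile] Order_Verify totals on Cart when offline"));            // true
console.log(isValidName("Order_Check A or B on Cart"));                                   // false (no "when" clause)
```

The regex only enforces structure; reviewers still need to catch combined conditions hidden inside a well-formed `when` clause.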

## 3. Priority Levels

- **High**: Critical path, blocker bug.
- **Normal**: Standard validation, edge case.
- **Low**: Cosmetic, minor improvement.

## 4. References

- [Detailed Examples](references/test_case_standards.md)

Overview

This skill defines standards for creating high-quality, granular manual test cases and QA processes. It enforces one-condition-per-test-case granularity, a clear naming convention, and priority levels to streamline review and execution. The goal is predictable, traceable test artifacts that scale across platforms and teams.

How this skill works

The skill inspects manual test case authoring and QA workflows to ensure each test covers a single condition on one screen, prohibits OR logic, and applies a consistent naming pattern. It checks that platform-specific tags appear only when a requirement is exclusive to one platform, and that priorities map to clear business impact. It also surfaces mismatches and suggests remediations to align test assets with the standard.

When to use it

  • Authoring new manual test cases for UI or integration flows
  • Reviewing or refactoring existing test case repositories
  • Onboarding QA engineers to a team’s manual test process
  • Preparing regression suites for release validation
  • Auditing test coverage and traceability across platforms

Best practices

  • Write one test case per single condition on one screen; split screens and configurations into separate tests
  • Avoid OR logic; each test must validate a single, distinct path
  • Use naming pattern: ([Platform]) [Module]_[Action] on [Screen] when [Condition]; omit platform if it applies to all
  • Tag priority clearly: High for critical/blocker, Normal for standard/edge, Low for cosmetic improvements
  • Keep descriptions concise and include steps, expected result, and any setup/teardown prerequisites
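
A test case record that follows these practices might be shaped like the sketch below. The field names are illustrative, not mandated by the skill:

```typescript
// Hypothetical shape for a manual test case record following the practices above.
interface ManualTestCase {
  title: string;                        // ([Platform]) [Module]_[Action] on [Screen] when [Condition]
  priority: "High" | "Normal" | "Low";  // maps to business impact
  preconditions: string[];              // setup/teardown prerequisites
  steps: string[];                      // one action per step
  expectedResult: string;               // single expected outcome, no OR logic
}

const example: ManualTestCase = {
  title: "Order_Verify payment term on Item Details when Toggle is OFF",
  priority: "Normal",
  preconditions: ["User is logged in", "Payment-term toggle is OFF in settings"],
  steps: ["Open Item Details for any ordered item", "Inspect the payment term field"],
  expectedResult: "Payment term is not displayed",
};

console.log(example.title);
```

Keeping `expectedResult` singular makes it harder to smuggle two conditions into one case.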

Example use cases

  • Create a regression suite for an e-commerce app with separate TCs for Order Details and Item Details screens
  • Refactor a large test repository by splitting combined tests that used OR logic into discrete cases
  • Onboard a new QA hire by providing test case templates and naming examples for mobile and web
  • Run a pre-release audit to ensure all critical paths have High-priority test coverage
  • Map manual test cases to acceptance criteria in a cross-platform feature rollout

FAQ

When should I include the [Platform] tag in the name?

Include [Platform] only when the requirement or behavior is specific to one platform; omit it when the same test applies to both web and mobile.

What if a flow truly requires multiple conditions?

Split the flow into multiple test cases, each verifying a single condition. Use a separate orchestration or end-to-end test to validate combined flows if needed.