This skill enforces QA best practices by generating granular test cases with single conditions per screen and clear naming conventions.
`npx playbooks add skill hoangnguyen0403/agent-skills-standard --skill quality-assurance`

Review the files below or copy the command above to add this skill to your agents.
---
name: Quality Assurance
description: Standards for creating high-quality, granular manual test cases and QA processes.
metadata:
  labels: [qa, testing, best-practices]
  triggers:
    files: ['**/*.feature', '**/*.test.ts', '**/test_plan.md']
    keywords: [test case, qa, bug report, testing standard]
---
# Quality Assurance Standards
## **Priority: P1 (HIGH)**
## 1. Test Case Granularity
- **1 Test Case = 1 Condition on 1 Screen**.
- **Split Screens**: "Order Details" and "Item Details" require separate test cases.
- **Split Conditions**: "Config A" and "Config B" require separate test cases.
- **No "OR" Logic**: each test case must exercise a single, distinct path (see the sketch after this list).
## 2. Naming Convention
- **Pattern**: `([Platform]) [Module]_[Action] on [Screen] when [Condition]`
- **Rule**: Only include `[Platform]` if the requirement is exclusive to one platform (e.g., `[Mobile]`). Omit it if the requirement supports **Both**.
- **Example**: `Order_Verify payment term on Item Details when Toggle is OFF` (Supports Both)
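As a rough sketch, the pattern can also be checked mechanically; the regex and the `isValidTestCaseName` helper below are illustrative assumptions, not an API this skill provides.

```ts
// Approximates "([Platform]) [Module]_[Action] on [Screen] when [Condition]";
// the platform tag is optional. Illustrative only.
const TC_NAME_PATTERN = /^(\[[A-Za-z]+\]\s+)?[A-Za-z]+_.+ on .+ when .+$/;

function isValidTestCaseName(name: string): boolean {
  return TC_NAME_PATTERN.test(name);
}

// Supports both platforms, so the tag is omitted:
isValidTestCaseName("Order_Verify payment term on Item Details when Toggle is OFF"); // true
// Mobile-only requirement keeps the tag:
isValidTestCaseName("[Mobile] Order_Verify layout on Item Details when offline");    // true
// Missing module prefix, screen, and condition:
isValidTestCaseName("Verify payment terms and discounts");                            // false
```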
## 3. Priority Levels
- **High**: Critical path, blocker bug.
- **Normal**: Standard validation, edge case.
- **Low**: Cosmetic, minor improvement.
## 4. References
- [Detailed Examples](references/test_case_standards.md)
This skill defines standards for creating high-quality, granular manual test cases and QA processes. It enforces one-condition-per-test-case granularity, a clear naming convention, and priority levels to streamline review and execution. The goal is predictable, traceable test artifacts that scale across platforms and teams.
The skill inspects manual test case authoring and QA workflows to ensure that each test covers a single condition on one screen, contains no OR logic, and follows the naming pattern. It checks that platform-specific tags appear only when a requirement is exclusive to one platform and that priorities map to clear business impact. It also surfaces mismatches and suggests remediations to align test assets with the standard.
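As a minimal sketch of that kind of check, assuming test case names are available as plain strings, the function below flags OR logic and names that stray from the convention; the function name and messages are hypothetical, not the skill's actual output.

```ts
// Hypothetical lint pass over test case names; messages are illustrative.
const NAME_PATTERN = /^(\[[A-Za-z]+\]\s+)?[A-Za-z]+_.+ on .+ when .+$/;

function reviewTestCaseNames(names: string[]): string[] {
  const findings: string[] = [];
  for (const name of names) {
    if (/\bor\b/i.test(name)) {
      findings.push(`"${name}": contains OR logic; split into one case per condition.`);
    }
    if (!NAME_PATTERN.test(name)) {
      findings.push(`"${name}": does not follow "[Module]_[Action] on [Screen] when [Condition]".`);
    }
  }
  return findings;
}

reviewTestCaseNames([
  "Order_Verify payment term on Item Details when Toggle is ON or OFF", // flagged: OR logic
  "Order_Verify payment term on Item Details when Toggle is OFF",       // no findings
]);
```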
**When should I include the `[Platform]` tag in the name?**
Include `[Platform]` only when the requirement or behavior is specific to one platform; omit it when the same test applies to both web and mobile.

**What if a flow truly requires multiple conditions?**
Split the flow into multiple test cases, each verifying a single condition. Use a separate orchestration or end-to-end test to validate combined flows if needed.