
screen-reader-testing skill


This skill helps validate screen reader compatibility and debug accessibility by guiding practical testing steps for VoiceOver, NVDA, and JAWS.

npx playbooks add skill xfstudio/skills --skill screen-reader-testing

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
1.1 KB
---
name: screen-reader-testing
description: Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
---

# Screen Reader Testing

Practical guide to testing web applications with screen readers for comprehensive accessibility validation.

## Use this skill when

- Validating screen reader compatibility
- Testing ARIA implementations
- Debugging assistive technology issues
- Verifying form accessibility
- Testing dynamic content announcements
- Ensuring navigation accessibility

## Do not use this skill when

- The task is unrelated to screen reader testing
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

## Resources

- `resources/implementation-playbook.md` for detailed patterns and examples.

## Overview

This skill tests web applications with screen readers including VoiceOver, NVDA, and JAWS to validate real-world assistive technology behavior. It helps identify accessibility gaps in ARIA usage, form controls, dynamic updates, and navigation semantics. Use it to reproduce issues, confirm fixes, and produce actionable verification steps.

## How this skill works

The skill walks through targeted checks using each screen reader on supported platforms, describing interactions, expected output, and common failure patterns. It inspects ARIA roles, live regions, focus management, semantic HTML, and keyboard navigation while guiding testers to reproduce and log observations. It also suggests debugging steps for common assistive-technology mismatches and how to verify fixes.
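As a concrete illustration of the live-region checks described above, a polite live region that screen readers should announce when its content changes can be sketched like this (the `id` and message text are placeholders, not part of this skill):

```html
<!-- Status region announced by screen readers when its text changes.
     aria-live="polite" waits for the user to pause speaking output;
     use "assertive" only for urgent messages. The region must exist
     in the DOM before the update, or many screen readers will not
     announce the change. -->
<div id="status" role="status" aria-live="polite"></div>

<script>
  // Updating the text of an already-mounted live region triggers an
  // announcement in NVDA, JAWS, and VoiceOver.
  document.getElementById('status').textContent = '3 results loaded';
</script>
```

A common failure pattern is inserting the live region into the DOM at the same moment as the message, which many screen readers ignore; mounting the empty region first avoids this.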

## When to use it

- Validating compatibility with VoiceOver, NVDA, and JAWS
- Testing ARIA implementation and live region announcements
- Debugging assistive-technology-specific behavior
- Verifying form labels, error announcements, and focus order
- Testing dynamic content updates and single-page app notifications

## Best practices

- Define clear test goals and list target screen readers and platforms before starting
- Test with real screen readers on real OS/browser combinations, not only automated tools
- Check semantics first: native HTML, correct landmarks, descriptive labels, and logical tab order
- Verify focus management and keyboard access for all interactive flows
- Document exact steps, expected announcements, and screenshots or recordings for reproduction
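As a sketch of the "semantics first" practice above, a form field whose label, required state, and error are announced consistently across screen readers can be built from native HTML plus a small amount of ARIA (the field name, `id`s, and error text here are illustrative):

```html
<form>
  <!-- A native <label> gives the field an accessible name in every
       screen reader without any ARIA. -->
  <label for="email">Email address</label>
  <input
    id="email"
    type="email"
    required
    aria-invalid="true"
    aria-describedby="email-error"
  />
  <!-- Linked via aria-describedby, so the error is read after the
       field's name and type when the input receives focus. -->
  <p id="email-error">Enter a valid email address, e.g. name@example.com.</p>
</form>
```

When testing, tab to the field with each screen reader running and confirm the announcement includes the label, the required/invalid state, and the error text, in that general order.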

## Example use cases

- Confirm that an SPA announces updated content via aria-live and maintains logical focus
- Validate form field labeling, required/error announcements, and error recovery flow
- Reproduce a report where a control is not announced by a specific screen reader and isolate ARIA conflicts
- Verify keyboard-only workflows and skip-nav landmark behavior on complex pages
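The skip-nav check in the last use case can be reproduced against markup like the following (a common pattern shown as an illustrative sketch; class names are placeholders):

```html
<body>
  <!-- First focusable element on the page: lets keyboard and screen
       reader users jump past repeated navigation to the main content. -->
  <a href="#main" class="skip-link">Skip to main content</a>

  <nav aria-label="Primary">
    <!-- site navigation -->
  </nav>

  <!-- tabindex="-1" ensures the target can receive focus when the
       skip link is activated, so the next Tab press continues from
       inside the main landmark. -->
  <main id="main" tabindex="-1">
    <h1>Page title</h1>
  </main>
</body>
```

To verify: load the page, press Tab once, activate the link, and confirm focus lands inside the `main` landmark in each screen reader's focus/virtual cursor mode.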

## FAQ

### Which screen readers should I prioritize?

Prioritize NVDA (Windows), JAWS (Windows), and VoiceOver (macOS/iOS). Choose the ones your users actually use, but test across platforms for broader coverage.

### Can automated tools replace screen reader testing?

Automated tools catch many issues but cannot reproduce spoken output, the keyboard focus experience, or nuanced assistive-technology behavior. Use them alongside manual screen reader testing, not in place of it.