This skill helps you verify UI features by performing browser-based checks and user interactions before marking work complete. To add it to your agents:

```
npx playbooks add skill gruckion/marathon-ralph --skill visual-verification
```
---
name: visual-verification
description: Visually verify implemented features work correctly before marking complete. Use when testing UI changes, verifying web features, or checking user flows work in the browser.
---
# Visual Verification
Verify implemented features work correctly through actual user interaction, not just automated tests.
## When to Use
- After implementing any UI feature
- Before marking an issue as complete
- When acceptance criteria involve user-visible behavior
- After fixing UI bugs
## Approaches
**Web Applications**: See [browser-verification.md](browser-verification.md) for Playwright MCP workflow.
**Mobile Applications**: See [mobile-verification.md](mobile-verification.md) *(Coming soon)*
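Outside the MCP workflow, the same checks can also be scripted directly. A minimal sketch using Playwright for Python (an assumed dependency, installed separately; the dev-server URL and screenshot path are placeholders):

```python
def summarize(errors: list[str]) -> str:
    """Turn collected console errors into a pass/fail verdict."""
    if not errors:
        return "PASS: no console errors"
    return f"FAIL: {len(errors)} console error(s): " + "; ".join(errors)


def verify(url: str, screenshot: str = "evidence.png") -> str:
    """Load the page, collect console errors, and capture a screenshot."""
    # Imported here so the pure helpers above stay usable without the dependency.
    from playwright.sync_api import sync_playwright

    errors: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on(
            "console",
            lambda msg: errors.append(msg.text) if msg.type == "error" else None,
        )
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=screenshot, full_page=True)
        browser.close()
    return summarize(errors)


if __name__ == "__main__":
    try:
        print(verify("http://localhost:3000"))  # placeholder URL
    except Exception as exc:  # Playwright or the dev server may be absent
        print(f"verification could not run: {exc}")
```

The screenshot doubles as the evidence artifact called for in the checklist below.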
## Verification Checklist
Before marking any UI task complete:
- [ ] Dev server is running and accessible
- [ ] Feature renders without console errors
- [ ] Layout renders correctly with no unintended element overlap
- [ ] User interactions work as expected
- [ ] Edge cases handled (empty states, loading, errors)
- [ ] Screenshot captured as evidence
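The overlap item on the checklist can be made mechanical. A sketch of a bounding-box intersection check, assuming boxes shaped like Playwright's `bounding_box()` result (`x`, `y`, `width`, `height`); the element names are hypothetical:

```python
def rects_overlap(a: dict, b: dict) -> bool:
    """True if two bounding boxes intersect (touching edges don't count)."""
    return (
        a["x"] < b["x"] + b["width"]
        and b["x"] < a["x"] + a["width"]
        and a["y"] < b["y"] + b["height"]
        and b["y"] < a["y"] + a["height"]
    )


# Example: a sidebar and main panel sitting side by side should not overlap,
# but a badge positioned at x=190 bleeds past the 200px sidebar boundary.
sidebar = {"x": 0, "y": 0, "width": 200, "height": 800}
main = {"x": 200, "y": 0, "width": 800, "height": 800}
badge = {"x": 190, "y": 10, "width": 40, "height": 20}
```

In a live session the dicts would come from `page.locator(...).bounding_box()` on the elements under test.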
## What NOT to Do
- Skip visual verification because "tests pass"
- Mark issues complete without browser testing
- Assume dev mode catches all errors (run `npm run build` too)
- Test only happy paths
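Because dev mode can mask errors, the production build is worth running as part of verification. A sketch that shells out to the build command (the `npm run build` default is an assumption; substitute your project's build command):

```python
import subprocess


def build_passes(cmd: tuple[str, ...] = ("npm", "run", "build")) -> bool:
    """Run the production build and report success; dev servers skip many checks."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr)  # surface type/bundling errors dev mode hid
    return result.returncode == 0


if __name__ == "__main__":
    try:
        print("build ok" if build_passes() else "build failed")
    except FileNotFoundError:
        print("npm not found; run inside the project environment")
```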
## FAQ

**Do I still need automated tests if I do visual verification?**

Yes. Automated tests catch regressions and edge cases at scale. Visual verification complements tests by validating real user interaction and rendering that tests may miss.

**How do I capture verification evidence?**

Take screenshots or short screen recordings focused on the feature and edge states. Save them to the issue or CI artifact storage and link them in the task for reviewers.
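For consistent evidence naming, a small helper can stamp each screenshot with the feature and capture time; the directory layout here is just a suggestion:

```python
import re
from datetime import datetime, timezone
from pathlib import Path


def evidence_path(feature: str, root: str = "verification-evidence") -> Path:
    """Build a timestamped, filesystem-safe screenshot path for a feature check."""
    slug = re.sub(r"[^a-z0-9]+", "-", feature.lower()).strip("-")
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return Path(root) / f"{slug}-{stamp}.png"
```

Pass the result as the `path` argument to a screenshot call, then attach the file to the issue.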