This skill captures offscreen Qt widget screenshots for visual verification of rendering, layouts, and appearance using vision analysis. To add it to your agents:

```bash
npx playbooks add skill talmolab/sleap --skill qt-testing
```
---
name: qt-testing
description: Capture and visually inspect Qt GUI widgets using screenshots. Use when asked to verify GUI rendering, test widget appearance, check layouts, or visually inspect any PySide6/Qt component. Enables Claude to "see" Qt interfaces by capturing offscreen screenshots and analyzing them with vision.
---
# Qt GUI Testing
Capture screenshots of Qt widgets for visual inspection without displaying windows on screen.
## Quick Start
```python
# Capture any widget (requires .claude/skills/qt-testing on sys.path; see the example below)
from scripts.qt_capture import capture_widget

path = capture_widget(my_widget, "description_here")
# Then read the saved screenshot with the Read tool
```
## Core Script
Run `scripts/qt_capture.py` or import `capture_widget` from it:
```bash
# Standalone test
uv run --with PySide6 python .claude/skills/qt-testing/scripts/qt_capture.py
```
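For orientation, here is a minimal sketch of what `init_qt` and `capture_widget` could look like. This is an assumption reconstructed from the behaviors documented in this skill (offscreen rendering via `Qt.WA_DontShowOnScreen`, timestamped PNGs under `scratch/.qt-screenshots/`), not the actual contents of `scripts/qt_capture.py`:

```python
# Hypothetical sketch; the real scripts/qt_capture.py may differ in details.
import sys
from datetime import datetime
from pathlib import Path

from PySide6.QtCore import Qt
from PySide6.QtWidgets import QApplication, QWidget

SCREENSHOT_DIR = Path("scratch/.qt-screenshots")


def init_qt() -> QApplication:
    """Return the running QApplication, creating one if needed."""
    return QApplication.instance() or QApplication(sys.argv)


def capture_widget(widget: QWidget, description: str) -> Path:
    """Render `widget` offscreen and save a timestamped PNG, returning its path."""
    widget.setAttribute(Qt.WA_DontShowOnScreen, True)  # render without a visible window
    widget.show()                                      # realize geometry and layouts
    QApplication.processEvents()                       # flush pending layout/paint events

    SCREENSHOT_DIR.mkdir(parents=True, exist_ok=True)
    timestamp = datetime.now().strftime("%Y-%m-%d.%H-%M-%S")
    path = SCREENSHOT_DIR / f"{timestamp}_{description}.png"
    widget.grab().save(str(path))                      # QWidget.grab() returns a QPixmap
    return path
```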
## Output Location
All screenshots are saved to: `scratch/.qt-screenshots/`
Naming: `{YYYY-MM-DD.HH-MM-SS}_{description}.png`
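Because the timestamp prefix sorts lexicographically in chronological order, the newest capture can be located with a snippet like this (illustrative, not part of the skill):

```python
from pathlib import Path

# The YYYY-MM-DD.HH-MM-SS prefix sorts chronologically, so the last entry is the newest.
latest = sorted(Path("scratch/.qt-screenshots").glob("*.png"))[-1]
print(latest)
```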
## Workflow
1. Create/obtain the widget to test
2. Call `capture_widget(widget, "description")`
3. Read the saved screenshot with the Read tool
4. Analyze with vision to verify correctness
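A minimal end-to-end run of this workflow, using a throwaway `QLabel` in place of a real application widget, might look like:

```python
import sys

from PySide6.QtWidgets import QLabel

# Make the skill's scripts importable (path assumed to match the dialog example below)
sys.path.insert(0, ".claude/skills/qt-testing")
from scripts.qt_capture import capture_widget, init_qt

app = init_qt()
label = QLabel("Hello, offscreen world!")
path = capture_widget(label, "hello_label")
print(f"Read this file with the Read tool: {path}")
```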
## Interaction Pattern
To interact with widgets (click buttons, etc.):
```python
from PySide6.QtWidgets import QApplication

# Find the child widget at the given coordinates (taken from vision analysis of the screenshot)
target = widget.childAt(x, y)

# Trigger it directly (simulated mouse events don't work reliably offscreen)
if hasattr(target, "click"):
    target.click()
QApplication.processEvents()

# Capture the result for a before/after comparison
capture_widget(widget, "after_click")
```
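One caveat, offered as an assumption rather than documented behavior: on high-DPI displays the saved PNG may be larger than the widget's logical size, in which case screenshot coordinates should be divided by the device pixel ratio before the `childAt()` lookup:

```python
# img_x, img_y are pixel coordinates measured on the saved screenshot.
# Scale by the device pixel ratio in case the PNG was rendered at 2x (high-DPI).
ratio = widget.devicePixelRatioF()
target = widget.childAt(int(img_x / ratio), int(img_y / ratio))
```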
## Example: Test a Dialog
```python
import sys
from PySide6.QtWidgets import QApplication
from sleap.gui.learning.dialog import TrainingEditorDialog
# Add skill scripts to path
sys.path.insert(0, ".claude/skills/qt-testing")
from scripts.qt_capture import capture_widget, init_qt
app = init_qt()
dialog = TrainingEditorDialog()
path = capture_widget(dialog, "training_dialog")
dialog.close()
print(f"Inspect: {path}")
```
## Key Points
- Uses `Qt.WA_DontShowOnScreen` - no window popup
- Renders identically to on-screen display (verified)
- Call `processEvents()` after interactions before capture
- Use `childAt(x, y)` to map vision coordinates to widgets
- Direct method calls (`.click()`) work; simulated mouse events don't
## FAQ

**Where are screenshots saved?**
Screenshots are saved to `scratch/.qt-screenshots/` with names like `YYYY-MM-DD.HH-MM-SS_description.png`.

**Do screenshots require a visible display?**
No. The skill uses `Qt.WA_DontShowOnScreen` to render widgets offscreen with visuals identical to on-screen display.

**Can I interact with widgets before capturing?**
Yes. Call widget methods (e.g., `.click()`) or modify state programmatically, then call `QApplication.processEvents()` before capturing for reliable results.