This skill guides you through workflow testing and QA to validate automation builds before launch, covering unit, integration, content, compliance, and performance checks.

```
npx playbooks add skill gtmagents/gtm-agents --skill workflow-testing
```
---
name: workflow-testing
description: Use when validating automation builds before launch or after significant
changes.
---
# Workflow Testing & QA Skill
## When to Use
- Any new automation or major revision prior to go-live.
- Regression testing after data, asset, or logic changes.
- Investigating deliverability, conversion, or routing anomalies.
## Framework
1. **Unit Tests** – confirm each branch, wait step, and action path with seed contacts.
2. **Integration Tests** – verify webhook/API calls, CRM updates, enrichment, scoring.
3. **Content QA** – links, tracking, personalization tokens, accessibility, localization.
4. **Compliance** – consent, suppression, GDPR/CASL/CCPA rules, regional requirements.
5. **Performance** – throttle checks, concurrency, error handling, failover.
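The unit-test layer above can be sketched in code. The snippet below is a minimal illustration, not a platform API: `evaluate_branch` is a hypothetical stand-in for your automation's decision step, and the seed contacts are example records. In practice you would drive the real workflow via its API and assert on the observed path.

```python
# Minimal sketch: unit-testing branch logic with seed contacts.
# `evaluate_branch` is a hypothetical stand-in for the platform's
# decision step; replace it with a call against the real workflow.

def evaluate_branch(contact):
    """Route a contact the way the automation's decision step would."""
    if not contact.get("consent"):
        return "suppress"          # suppression path for no-consent records
    if contact.get("score", 0) >= 50:
        return "sales_alert"       # high-score path
    return "nurture"               # default path

seed_contacts = [
    {"email": "seed-eu-optin@example.com", "consent": True, "score": 80},
    {"email": "seed-us-optout@example.com", "consent": False, "score": 90},
    {"email": "seed-new-lead@example.com", "consent": True, "score": 10},
]

expected = ["sales_alert", "suppress", "nurture"]
results = [evaluate_branch(c) for c in seed_contacts]
assert results == expected, f"Branch routing drifted: {results}"
print("All branch paths verified")
```

Each seed contact exercises exactly one branch, so a failed assertion points directly at the path that regressed.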
## Checklist
- Seed list matrix (personas, stages, regions, consent flags).
- Device/browser testing for email/SMS/push rendering.
- Logging + alerting validation.
- Rollback and kill switches documented.
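The seed list matrix from the checklist can be generated mechanically as the cross product of the dimensions you care about. The persona, stage, and region values below are placeholders; substitute your own segmentation.

```python
# Sketch of a seed list matrix: one synthetic contact per combination of
# persona, lifecycle stage, region, and consent flag. Values are examples.
from itertools import product

personas = ["admin", "end_user"]
stages = ["lead", "customer"]
regions = ["na", "eu", "apac"]
consent_flags = [True, False]

seed_matrix = [
    {
        "email": f"seed+{p}.{s}.{r}.{'optin' if c else 'optout'}@example.com",
        "persona": p,
        "stage": s,
        "region": r,
        "consent": c,
    }
    for p, s, r, c in product(personas, stages, regions, consent_flags)
]

print(len(seed_matrix))  # 2 * 2 * 3 * 2 = 24 seed contacts
```

Encoding the combination in the email address makes each seed's expected behavior obvious when it shows up in logs or inboxes.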
## Templates
- QA evidence log (screenshot, recipient, status, owner).
- Incident runbook for automation failures.
- Release checklist referencing stakeholders.
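A QA evidence log like the one named above can be kept as simply as a CSV with a fixed schema. The field names below are illustrative, mirroring the template's screenshot/recipient/status/owner columns; adapt them to whatever your release checklist actually tracks.

```python
# Hedged sketch of a QA evidence log; field names are illustrative.
import csv
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class EvidenceEntry:
    test_case: str
    recipient: str
    status: str       # "pass" | "fail" | "blocked"
    owner: str
    screenshot: str   # path or URL to the captured evidence
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entries = [
    EvidenceEntry(
        test_case="welcome-email-link-check",
        recipient="seed-eu-optin@example.com",
        status="pass",
        owner="qa-team",
        screenshot="evidence/welcome-links.png",
    ),
]

# Write the log so reviewers can audit release evidence in one place.
with open("qa_evidence_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(e) for e in entries)
```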
## Tips
- Automate regression tests via APIs or synthetic users.
- Store test data separately and purge regularly to avoid reporting noise.
- Use feature flags to stage rollouts before full scale.
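The second tip, isolating and purging test data, can be sketched as a tag-and-expire routine. The record shape and retention window below are assumptions; the point is that every synthetic record carries a marker and an age, so a scheduled job can remove it without touching production data.

```python
# Sketch: tag synthetic records at creation, purge them on a schedule so
# test traffic never pollutes reporting. Record shape is illustrative.
from datetime import datetime, timedelta, timezone

TEST_TAG = "qa-synthetic"
RETENTION = timedelta(days=7)  # assumed retention window

def purge_stale_test_records(records, now=None):
    """Keep real records, plus test records younger than RETENTION."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if TEST_TAG not in r["tags"] or now - r["created_at"] < RETENTION
    ]

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "tags": [], "created_at": now - timedelta(days=30)},          # real
    {"id": 2, "tags": [TEST_TAG], "created_at": now - timedelta(days=30)},  # stale test
    {"id": 3, "tags": [TEST_TAG], "created_at": now - timedelta(days=1)},   # fresh test
]
kept = purge_stale_test_records(records, now)
print([r["id"] for r in kept])  # [1, 3]
```

Running this on a cron or pipeline schedule keeps analytics clean while still letting recent test runs be inspected.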
---
This skill provides a production-ready workflow testing and QA toolkit for validating automation builds before launch or after major changes. It consolidates unit, integration, content, compliance, and performance checks into a repeatable framework. The goal is to reduce defects, ensure deliverability and compliance, and provide clear evidence for release decisions.
The skill inspects automation logic with layered tests: unit tests for branch and wait-step behavior, integration tests for webhooks and CRM actions, and content QA for links, tokens, and rendering. It verifies compliance rules, consent flags, and suppression lists, and runs performance checks for throttling, concurrency, and error handling. Test artifacts and logs are recorded to a QA evidence log and automated where possible using APIs or synthetic users.
## FAQ

**How do I avoid test data polluting production reports?**
Store test records in a separate dataset or tag them clearly, and purge synthetic data on a schedule to keep analytics clean.

**Can tests be automated?**
Yes. Automate unit and integration checks through APIs or synthetic users, and integrate them into CI pipelines for repeatable regression testing.
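A CI-friendly regression check along these lines can be sketched as follows. `AutomationClient` here is a hypothetical in-memory stub, not a real SDK; in a pipeline you would replace its methods with HTTP calls to your automation platform and CRM.

```python
# Sketch of an automatable regression check: enroll a synthetic contact
# and assert the expected downstream CRM update. `AutomationClient` is a
# hypothetical stub; swap in real API calls when wiring this into CI.
class AutomationClient:
    def __init__(self):
        self._crm = {}  # stands in for the CRM the automation updates

    def enroll(self, email, workflow_id):
        # Real implementation would POST to the automation platform and
        # the platform would write the CRM record asynchronously.
        self._crm[email] = {"workflow": workflow_id, "status": "enrolled"}

    def crm_record(self, email):
        return self._crm.get(email)

def test_enrollment_updates_crm():
    client = AutomationClient()
    client.enroll("synthetic-user@example.com", "wf-onboarding")
    record = client.crm_record("synthetic-user@example.com")
    assert record is not None and record["status"] == "enrolled"

test_enrollment_updates_crm()
print("regression check passed")
```

Because the check is a plain function with assertions, it drops into pytest or any CI runner unchanged once the stub is replaced with real calls.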