
This skill guides you through workflow testing and QA to validate automation builds before launch, covering unit, integration, content, compliance, and performance checks.

npx playbooks add skill gtmagents/gtm-agents --skill workflow-testing

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.3 KB
---
name: workflow-testing
description: Use when validating automation builds before launch or after significant
  changes.
---

# Workflow Testing & QA Skill

## When to Use
- Any new automation or major revision prior to go-live.
- Regression testing after data, asset, or logic changes.
- Investigating deliverability, conversion, or routing anomalies.

## Framework
1. **Unit Tests** – confirm each branch, wait step, and action path with seed contacts.
2. **Integration Tests** – verify webhook/API calls, CRM updates, enrichment, scoring.
3. **Content QA** – links, tracking, personalization tokens, accessibility, localization.
4. **Compliance** – consent, suppression, GDPR/CASL/CCPA rules, regional requirements.
5. **Performance** – throttle checks, concurrency, error handling, failover.
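The unit-test layer above can be sketched in a few lines. This is an illustrative example, not a real platform API: `route_contact` and its rules stand in for whatever branching step your automation tool exposes, and the seed addresses are a naming convention, not required.

```python
# Hypothetical branch-logic unit test using seed contacts.
# route_contact is a stand-in for your workflow's routing step.

def route_contact(contact):
    """Return the branch a contact should take in the workflow."""
    if not contact["consent"]:
        return "suppressed"        # compliance: no consent, no send
    if contact["region"] == "EU":
        return "eu_nurture"        # regional routing branch
    return "default_nurture"

# One seed contact per expected path through the workflow.
seeds = [
    ({"email": "seed+eu@example.com", "region": "EU", "consent": True}, "eu_nurture"),
    ({"email": "seed+us@example.com", "region": "US", "consent": True}, "default_nurture"),
    ({"email": "seed+no@example.com", "region": "US", "consent": False}, "suppressed"),
]

for contact, expected in seeds:
    actual = route_contact(contact)
    assert actual == expected, f"{contact['email']}: got {actual}, want {expected}"
print("all branch tests passed")
```

The same pattern extends to wait steps and action paths: encode each expected route as a seed row, and any logic change that breaks a branch fails loudly instead of silently misrouting contacts.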

## Checklist
- Seed list matrix (personas, stages, regions, consent flags).
- Device/browser testing for email/SMS/push rendering.
- Logging + alerting validation.
- Rollback and kill switches documented.
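The seed list matrix in the checklist is just the cross product of the dimensions you care about, so it can be generated rather than maintained by hand. The persona, stage, and region values below are placeholders; substitute your own segmentation.

```python
from itertools import product

# Placeholder dimensions; replace with your real segmentation values.
personas = ["admin", "practitioner"]
stages = ["mql", "sql", "customer"]
regions = ["us", "eu", "apac"]
consent_flags = [True, False]

# One seed contact per combination; the address pattern is only a convention.
seed_matrix = [
    {
        "email": f"seed+{p}-{s}-{r}-{'optin' if c else 'optout'}@example.com",
        "persona": p, "stage": s, "region": r, "consent": c,
    }
    for p, s, r, c in product(personas, stages, regions, consent_flags)
]

print(len(seed_matrix))  # 2 * 3 * 3 * 2 = 36 seed contacts
```

Generating the matrix keeps coverage honest: adding a region or consent state automatically adds every missing combination instead of whichever rows someone remembered to create.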

## Templates
- QA evidence log (screenshot, recipient, status, owner).
- Incident runbook for automation failures.
- Release checklist referencing stakeholders.

## Tips
- Automate regression tests via APIs or synthetic users.
- Store test data separately and purge regularly to avoid reporting noise.
- Use feature flags to stage rollouts before full scale.
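One common way to implement the feature-flag tip is deterministic hash bucketing, sketched below under the assumption that each contact has a stable ID. This is a generic technique, not a feature of any particular flag vendor.

```python
import hashlib

def in_rollout(contact_id: str, flag: str, percent: int) -> bool:
    """Deterministically bucket a contact into a staged rollout (0-100%)."""
    digest = hashlib.sha256(f"{flag}:{contact_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100   # stable 0-99 bucket per contact/flag pair
    return bucket < percent

# The same contact always lands in the same bucket, so ramping 5% -> 25% -> 100%
# only ever adds contacts to the new flow; nobody flips back out mid-journey.
assert in_rollout("c-123", "new-nurture-flow", 100) is True
assert in_rollout("c-123", "new-nurture-flow", 0) is False
```

Keying the hash on both flag and contact ID means different rollouts get independent buckets, so the same 5% of contacts are not the guinea pigs for every experiment.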

---

Overview

This skill provides a production-ready workflow testing and QA toolkit for validating automation builds before launch or after major changes. It consolidates unit, integration, content, compliance, and performance checks into a repeatable framework. The goal is to reduce defects, ensure deliverability and compliance, and provide clear evidence for release decisions.

How this skill works

The skill inspects automation logic with layered tests: unit tests for branch and wait-step behavior, integration tests for webhooks and CRM actions, and content QA for links, tokens, and rendering. It verifies compliance rules, consent flags, and suppression lists, and runs performance checks for throttling, concurrency, and error handling. Test artifacts and logs are recorded to a QA evidence log and automated where possible using APIs or synthetic users.
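The QA evidence log mentioned above can be as simple as structured rows written to CSV. A minimal sketch, with hypothetical field names mirroring the template (screenshot, recipient, status, owner):

```python
import csv
import io
from datetime import datetime, timezone

# One row per test observation; columns mirror the QA evidence-log template.
FIELDS = ["timestamp", "test", "recipient", "status", "screenshot", "owner"]

def log_evidence(rows, test, recipient, status, screenshot, owner):
    """Append one evidence row with a UTC timestamp."""
    rows.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test": test, "recipient": recipient, "status": status,
        "screenshot": screenshot, "owner": owner,
    })

rows = []
log_evidence(rows, "welcome-email-eu-branch", "seed+eu@example.com",
             "pass", "screens/welcome-eu.png", "qa@acme.test")

# Serialize to CSV so the log can be attached to a release ticket.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The point is less the format than the discipline: every pass/fail gets a timestamp, an artifact, and an owner, which is what makes the log usable as release evidence.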

When to use it

  • Before any new automation or major revision goes live
  • After data model, asset, or logic changes to catch regressions
  • When investigating delivery, conversion, or routing anomalies
  • Prior to staged rollouts controlled by feature flags
  • During periodic audits for compliance or performance drift

Best practices

  • Build a seed list matrix covering personas, funnel stages, regions, and consent states
  • Automate regression tests via APIs or synthetic users and run them in CI
  • Keep test data separate and purge regularly to avoid analytics contamination
  • Validate device/browser rendering for email, SMS, and push across clients
  • Document rollback, kill switches, and incident runbooks with clear owners

Example use cases

  • Run unit and integration tests after swapping an enrichment provider to ensure CRM fields still map correctly
  • Validate personalized email tokens, tracking links, and A/B variants before a campaign launch
  • Confirm suppression and consent logic after a GDPR policy update to prevent unlawful sends
  • Stress-test throttle and concurrency settings before scaling a high-volume transactional workflow
  • Produce QA evidence logs for sign-off during a staged feature-flag rollout

FAQ

How do I avoid test data polluting production reports?

Store test records in a separate dataset or tag them clearly, and purge synthetic data on a schedule to keep analytics clean.
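A scheduled purge of tagged synthetic records can be sketched as a simple filter. The `"synthetic"` tag and record shape here are assumptions for illustration; in practice this would run against your CRM or warehouse via its API.

```python
from datetime import datetime, timedelta, timezone

def purge_synthetic(records, max_age_days=30, now=None):
    """Drop tagged test records older than max_age_days; keep everything else."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        r for r in records
        if not (r.get("tag") == "synthetic" and r["created"] < cutoff)
    ]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "tag": "synthetic", "created": now - timedelta(days=90)},  # stale seed: purged
    {"id": 2, "tag": "synthetic", "created": now - timedelta(days=5)},   # recent seed: kept
    {"id": 3, "tag": None,        "created": now - timedelta(days=400)}, # real contact: kept
]
kept = purge_synthetic(records, max_age_days=30, now=now)
print([r["id"] for r in kept])  # [2, 3]
```

Note the filter never touches untagged records, so real contacts are safe regardless of age; tagging at creation time is what makes a purge like this low-risk.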

Can tests be automated?

Yes. Automate unit and integration checks through APIs or synthetic users, and integrate them into CI pipelines for repeatable regression testing.