This skill helps you design and run copy experiments to optimize hooks, offers, and CTAs across channels.
```shell
npx playbooks add skill gtmagents/gtm-agents --skill offer-testing
```
---
name: offer-testing
description: Use when designing copy experiments to optimize hooks, offers, and CTAs.
---
# Offer Testing Playbooks Skill
## When to Use
- Planning subject line/CTA/offer tests across email, ads, or landing pages.
- Validating new positioning or pricing language.
- Running copy refresh cycles for campaigns.
## Framework
1. **Hypothesis** – statement of expected lift + rationale.
2. **Variable Selection** – hook, CTA, body copy, offer framing, proof element.
3. **Segmentation** – define audience splits and holdouts.
4. **Metrics** – primary KPI + guardrails (opens, CTR, CVR, CPL, unsubscribe rate, spam complaints).
5. **Analysis** – statistical significance (chi-square, z-test) or Bayesian approach.
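As a minimal sketch of the frequentist analysis step, the snippet below runs a two-sided z-test on the difference between two conversion rates (the counts are hypothetical, not from any real campaign):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    conv_*: number of conversions; n_*: sample size per variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 120/2400, variant 156/2400
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At a 0.05 significance threshold this hypothetical variant would be declared a winner; with smaller samples the same lift would not reach significance, which is why sample size is fixed up front.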
## Templates
- Experiment brief (variable, control, variant, KPI, sample size, duration).
- Results report (metric table, significance, insight, next steps).
- Prioritization matrix (ICE/RICE scoring).
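The prioritization matrix can be as simple as a scored, sorted list. A sketch of RICE scoring (Reach × Impact × Confidence ÷ Effort), with made-up test ideas and numbers for illustration:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort.
    reach: people affected per period; impact: 0.25-3 scale;
    confidence: 0-1; effort: person-weeks."""
    return (reach * impact * confidence) / effort

# Hypothetical candidate tests
ideas = [
    ("Urgency CTA on pricing page", rice_score(8000, 2, 0.8, 2)),
    ("Social-proof bar in nurture email", rice_score(3000, 3, 0.5, 4)),
]
for name, score in sorted(ideas, key=lambda x: x[1], reverse=True):
    print(f"{name}: {score:.0f}")
```

ICE scoring works the same way without the reach term; either way, the point is a consistent, comparable number for sequencing the test backlog.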
## Tips
- Limit to one variable per test to isolate learnings.
- Ensure minimum sample sizes per channel before declaring winners.
- Log tests and learnings in a shared repository.
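To make the minimum-sample-size tip concrete, here is a rough per-variant estimate for a two-proportion test, hard-coded to a two-sided alpha of 0.05 and 80% power (the baseline rate and detectable effect below are placeholder values):

```python
def sample_size_per_arm(p_base, mde):
    """Approximate sample size per variant for a two-proportion test
    at two-sided alpha = 0.05 (z = 1.96) and power = 0.80 (z = 0.84).
    p_base: baseline conversion rate; mde: absolute minimum detectable effect."""
    z_alpha, z_beta = 1.96, 0.84
    p_var = p_base + mde
    # Sum of the variances of the two proportions
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = ((z_alpha + z_beta) ** 2) * variance / (mde ** 2)
    return int(n) + 1  # round up

# E.g. 5% baseline CVR, detecting an absolute lift of 1 point
print(sample_size_per_arm(p_base=0.05, mde=0.01))
```

Note how small absolute lifts on low baseline rates demand thousands of contacts per arm; low-volume channels may need larger effects or longer durations before a winner can be declared.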
---
This skill helps design and run copy experiments to optimize hooks, offers, and CTAs across email, ads, and landing pages. It provides a concise testing framework, ready-to-use templates, and guidance for analysis and reporting. Use it to reduce guesswork and accelerate data-driven copy decisions.
The skill guides you through hypothesis definition, variable selection, audience segmentation, metric setup, and statistical analysis. It supplies experiment briefs, result-report templates, and a prioritization matrix to plan and sequence tests. Finally, it recommends analysis methods (frequentist or Bayesian) and clear stop/win criteria.
## FAQ
**How many variables should I test at once?**
Limit to one variable per experiment whenever possible. If you must test multiple variables, use a multivariate design and increase sample size accordingly.

**When is a result statistically valid?**
Declare validity after reaching the precomputed sample size and duration, and when the test meets your chosen significance threshold (or Bayesian probability). Also check guardrails like unsubscribe or spam rates.