
This skill helps you design and run copy experiments to optimize hooks, offers, and CTAs across channels.

npx playbooks add skill gtmagents/gtm-agents --skill offer-testing

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.1 KB
---
name: offer-testing
description: Use when designing copy experiments to optimize hooks, offers, and CTAs.
---

# Offer Testing Playbooks Skill

## When to Use
- Planning subject line/CTA/offer tests across email, ads, or landing pages.
- Validating new positioning or pricing language.
- Running copy refresh cycles for campaigns.

## Framework
1. **Hypothesis** – statement of expected lift + rationale.
2. **Variable Selection** – hook, CTA, body copy, offer framing, proof element.
3. **Segmentation** – define audience splits and holdouts.
4. **Metrics** – primary KPI + guardrails (opens, CTR, CVR, CPL, unsub, spam).
5. **Analysis** – frequentist significance testing (chi-square or two-proportion z-test) or a Bayesian approach.

## Templates
- Experiment brief (variable, control, variant, KPI, sample size, duration).
- Results report (metric table, significance, insight, next steps).
- Prioritization matrix (ICE/RICE scoring).

## Tips
- Limit to one variable per test to isolate learnings.
- Ensure minimum sample sizes per channel before declaring winners.
- Log tests and learnings in a shared repository.

---

Overview

This skill helps design and run copy experiments to optimize hooks, offers, and CTAs across email, ads, and landing pages. It provides a concise testing framework, ready-to-use templates, and guidance for analysis and reporting. Use it to reduce guesswork and accelerate data-driven copy decisions.

How this skill works

The skill guides you through hypothesis definition, variable selection, audience segmentation, metric setup, and statistical analysis. It supplies experiment briefs, result-report templates, and a prioritization matrix to plan and sequence tests. Finally, it recommends analysis methods (frequentist or Bayesian) and clear stop/win criteria.
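The experiment brief mentioned above can be sketched as a simple record. The field names mirror the brief template from the skill (variable, control, variant, KPI, sample size, duration); the class name and example values are illustrative, not part of the skill itself:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentBrief:
    """One experiment brief; fields mirror the template
    (variable, control, variant, KPI, sample size, duration)."""
    variable: str                 # what is being tested, e.g. "subject line"
    control: str                  # current copy
    variant: str                  # challenger copy
    primary_kpi: str              # e.g. "open rate"
    guardrails: list = field(default_factory=list)
    sample_size_per_arm: int = 0  # precomputed minimum per variant
    duration_days: int = 0        # minimum runtime before reading results

brief = ExperimentBrief(
    variable="subject line",
    control="Your Q3 report is ready",
    variant="Don't miss your Q3 report",
    primary_kpi="open rate",
    guardrails=["unsubscribe rate", "spam complaint rate"],
    sample_size_per_arm=5000,
    duration_days=7,
)
print(brief.variable, "->", brief.primary_kpi)
```

Keeping briefs in a structured form like this also makes the shared test log easy to build.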

When to use it

  • Planning A/B tests for subject lines, CTAs, headlines, or offer framing
  • Validating new positioning, pricing language, or benefit statements
  • Refreshing campaign copy across email, ads, or landing pages
  • Designing experiments with strict guardrails for opens, CTR, CVR, CPL
  • Prioritizing multiple copy ideas when resources or sample sizes are limited
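When sample size or bandwidth forces a choice between ideas, the ICE scoring from the prioritization matrix is just a product of three 1–10 factors. A minimal sketch, with hypothetical idea names and scores:

```python
def ice_score(impact, confidence, ease):
    """ICE: score each factor 1-10; the product ranks test ideas."""
    return impact * confidence * ease

# Hypothetical backlog of copy-test ideas.
ideas = {
    "Urgency-led subject line": ice_score(7, 6, 9),
    "Discount vs. bonus offer": ice_score(9, 5, 4),
    "Social-proof CTA": ice_score(6, 8, 8),
}
ranked = sorted(ideas, key=ideas.get, reverse=True)
print(ranked)  # highest ICE score first
```

RICE works the same way, with reach multiplied in and effort used as a divisor instead of ease.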

Best practices

  • Test one variable at a time to isolate causal effects
  • Define a clear hypothesis with expected lift and rationale before launching
  • Set primary KPI and guardrail metrics (opens, CTR, CVR, CPL, unsubscribes, spam)
  • Calculate minimum sample size per channel and honor minimum duration to avoid early stopping
  • Log experiments, variants, results, and learnings in a shared repository for reuse
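The minimum-sample-size practice above can be sketched with the standard two-proportion approximation. This assumes a two-sided alpha of 0.05 and 80% power; the baseline rate and target lift in the example are placeholders:

```python
import math

def sample_size_per_arm(p_base, mde_rel):
    """Approximate visitors needed per variant to detect a relative
    lift `mde_rel` over baseline rate `p_base` (alpha=0.05, power=0.80)."""
    z_alpha, z_beta = 1.96, 0.84          # standard normal quantiles
    p_var = p_base * (1 + mde_rel)        # expected variant rate
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2
    return math.ceil(n)

# e.g. 3% baseline CVR, aiming to detect a 20% relative lift
print(sample_size_per_arm(0.03, 0.20))
```

Note how quickly the requirement grows as the detectable lift shrinks; this is why low-traffic channels should test bolder variants or accept longer durations.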

Example use cases

  • Email subject line test to improve open rate with a conservative holdout group
  • Two-variant CTA test on a landing page to increase conversion rate for a trial signup
  • Pricing language experiment comparing perceived value phrasing across paid ads
  • Offer framing test (discount vs. bonus) across the same audience segment to measure CPL impact
  • Copy refresh cycle: run prioritized sequential tests using ICE/RICE scoring until a clear winner emerges

FAQ

How many variables should I test at once?

Limit to one variable per experiment whenever possible. If you must test multiple variables, use a multivariate design and increase sample size accordingly.

When is a result statistically valid?

Treat a result as valid only after the test reaches its precomputed sample size and minimum duration, and the lift meets your chosen significance threshold (or Bayesian probability of superiority). Also confirm that guardrail metrics such as unsubscribe and spam rates stayed within acceptable bounds.
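Both checks can be sketched with the standard library alone: a pooled two-proportion z-test for the significance threshold, and a Monte Carlo estimate of the Bayesian probability that the variant beats control under uniform Beta(1, 1) priors. The conversion counts below are illustrative:

```python
import math
import random

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20_000, seed=7):
    """Monte Carlo P(rate_B > rate_A) under Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        > rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        for _ in range(draws)
    )
    return wins / draws

z, p = two_proportion_z_test(120, 2400, 156, 2400)
pb = prob_b_beats_a(120, 2400, 156, 2400)
print(f"z={z:.2f} p={p:.4f} P(B>A)~{pb:.3f}")
```

Remember that neither number substitutes for the precomputed sample size and duration: computing them early and stopping on a good reading is exactly the early-stopping trap the best practices warn against.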