
ab-testing skill

/skills/ab-testing

This skill helps you design and run rigorous A/B tests on funnel pages: what to test, how to structure variants, when a result is statistically significant, and how to document outcomes.

npx playbooks add skill ominou5/funnel-architect-plugin --skill ab-testing

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.7 KB
---
name: ab-testing
description: >
  A/B testing strategy and implementation for funnel pages. Defines what to test,
  how to structure variants, statistical significance thresholds,
  and common testing patterns.
---

# A/B Testing

Test everything. Opinions are nice — data is better.

## What to Test (Priority Order)

| Priority | Element | Expected Impact |
|---|---|---|
| 🔴 P0 | Headline | 10–50% lift |
| 🔴 P0 | CTA text + color | 5–30% lift |
| 🟡 P1 | Hero image/video | 5–20% lift |
| 🟡 P1 | Form fields (fewer vs. more) | 10–40% lift |
| 🟡 P1 | Social proof placement | 5–15% lift |
| 🟢 P2 | Page layout (long vs. short) | 5–20% lift |
| 🟢 P2 | Pricing display | 5–25% lift |
| 🟢 P2 | Urgency messaging | 3–15% lift |
| 🔵 P3 | Color scheme | 2–10% lift |
| 🔵 P3 | Font choices | 1–5% lift |

## Testing Rules

1. **Test one variable at a time** — Change only the element being tested
2. **50/50 split** — Equal traffic to each variant
3. **Minimum sample size** — At least 100 conversions per variant before calling a winner
4. **Statistical significance** — Wait for 95% confidence before declaring a winner
5. **Run for at least 7 days** — Captures day-of-week variations
6. **Document everything** — Record hypothesis, variant details, and results

## Test Hypothesis Template

```
HYPOTHESIS: If we change [element] from [current] to [proposed],
then [metric] will [increase/decrease] by [estimated %]
because [reasoning based on conversion principles].

TEST SETUP:
- Control (A): [Current version description]
- Variant (B): [New version description]
- Primary metric: [Conversion rate / Click rate / etc.]
- Secondary metric: [Revenue / Engagement / etc.]
- Required sample: [Number] visitors per variant
- Estimated duration: [X] days at [Y] daily visitors
```
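
The "Required sample" and "Estimated duration" slots can be filled with the standard normal-approximation power calculation. A sketch assuming 95% confidence and 80% power (z values 1.96 and 0.84; function names are illustrative):

```python
from math import ceil

def required_sample(baseline_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect `relative_lift` (e.g. 0.20
    for a 20% lift) at 95% confidence / 80% power, normal approximation."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

def estimated_days(n_per_variant, daily_visitors):
    """Duration for a 50/50 split, floored at the 7-day minimum."""
    return max(7, ceil(2 * n_per_variant / daily_visitors))

n = required_sample(0.05, 0.20)   # 5% baseline, want to detect a 20% lift
print(n, "visitors/variant,", estimated_days(n, 500), "days at 500/day")
```

Note how sensitive the number is to the lift you want to detect: halving the detectable lift roughly quadruples the required sample.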

## Common Tests by Page Type

### Opt-In Page
- Headline: Problem-focused vs. Solution-focused
- CTA: "Get Free Access" vs. "Download Now" vs. "Send Me the Guide"
- Form: Email only vs. Name + Email
- Social proof: Subscriber count vs. Testimonial

### Sales Page
- Long-form vs. Short-form copy
- Video sales letter vs. Text
- Testimonials at top vs. After offer
- Payment: One-time vs. Payment plan (default)

### Pricing Page
- 2 plans vs. 3 plans
- Annual default vs. Monthly default
- Feature comparison table vs. Simple list
- "Most Popular" badge placement

## Results Tracking

After each test, log:

```
TEST: [Test Name]
DATE: [Start] → [End]
TRAFFIC: [Total visitors] ([Per variant])
RESULTS:
  Control: [X]% conversion ([N] conversions)
  Variant: [Y]% conversion ([N] conversions)
WINNER: [Control/Variant]
LIFT: [+/- X]%
CONFIDENCE: [X]%
NEXT: [What to test next based on learnings]
```
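
A small helper keeps log entries uniform across tests. A sketch that renders the template above, computing relative lift as (B − A) / A (the function name is an assumption):

```python
def format_test_log(name, start, end, conv_a, n_a, conv_b, n_b,
                    confidence, next_test):
    """Render one results-log entry. Lift is relative: (B - A) / A."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    lift = (rate_b - rate_a) / rate_a
    winner = "Variant" if rate_b > rate_a else "Control"
    return "\n".join([
        f"TEST: {name}",
        f"DATE: {start} -> {end}",
        f"TRAFFIC: {n_a + n_b} ({n_a} / {n_b} per variant)",
        "RESULTS:",
        f"  Control: {rate_a:.2%} conversion ({conv_a} conversions)",
        f"  Variant: {rate_b:.2%} conversion ({conv_b} conversions)",
        f"WINNER: {winner}",
        f"LIFT: {lift:+.1%}",
        f"CONFIDENCE: {confidence:.0%}",
        f"NEXT: {next_test}",
    ])

print(format_test_log("Headline v2", "2024-03-01", "2024-03-10",
                      100, 2000, 130, 2000, 0.958, "CTA copy"))
```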

Overview

This skill provides a practical A/B testing strategy and implementation guide for funnel pages. It defines priority elements to test, how to structure variants, statistical thresholds, and standard documentation templates. The focus is on repeatable tests that drive measurable lifts in conversion metrics across opt-in, sales, and pricing pages.

How this skill works

The skill inspects funnel page elements and recommends test priorities based on expected impact (headline, CTA, hero assets, forms, layout, pricing, urgency, and visual design). It prescribes test setup rules: single-variable tests, equal traffic splits, minimum sample sizes, 95% confidence, and a minimum run of seven days. Templates for hypothesis framing and result logs make tests auditable and actionable.

When to use it

  • Launching a new funnel page to validate messaging and layout
  • Improving underperforming opt-in, sales, or pricing pages
  • Prioritizing design and copy changes that impact conversion rate
  • Deciding between product offers, plan structures, or checkout flows
  • Testing mobile vs. desktop variations and page speed tradeoffs

Best practices

  • Test one variable at a time to isolate impact
  • Start with high-impact elements: headline, CTA, and form fields
  • Use 50/50 traffic splits and ensure at least 100 conversions per variant
  • Wait for 95% statistical confidence and run tests for a full week to include weekday cycles
  • Document hypothesis, setup, raw results, and next experiment to build institutional knowledge
  • Track primary and secondary metrics to avoid local wins that hurt revenue or engagement

Example use cases

  • Opt-in page headline: problem-focused vs. solution-focused to boost email signups
  • CTA experiment: button copy and color to increase click-throughs on the hero section
  • Form simplification: email-only vs. name+email to measure lift in submissions
  • Pricing page: two plans vs. three plans and placement of a “Most Popular” badge
  • Sales page layout: long-form vs. short-form copy to compare conversion and average order value

FAQ

What sample size should I use?

Aim for at least 100 conversions per variant; from your baseline conversion rate, calculate how many visitors each variant needs to reach that number.

How long should a test run?

Run tests for at least 7 days to capture day-of-week variation, and keep them running until you reach both 95% confidence and the minimum sample size.