Split-test your links to find the highest-performing version

A/B testing allows you to compare two different link variations to determine which performs better. Test different products, discounts, or customer experiences to optimize your conversion rates and maximize revenue.
Run A/B tests for at least one week to collect meaningful data, and avoid making changes during the test period to ensure accurate results.

How A/B Testing Works

When you create an A/B test, Checkout Links automatically:
  1. Splits traffic between your two link variations (Link A and Link B)
  2. Tracks performance for both variations independently
  3. Analyzes results with statistical confidence calculations
  4. Recommends a winner when sufficient data is collected
Your customers are automatically and consistently routed to the same variation for a seamless experience.
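Checkout Links handles this routing for you, but it can help to see how consistent assignment is typically done. The TypeScript sketch below is a hypothetical illustration only (assignVariation, visitorId, and linkAWeight are made-up names, not part of the product): it hashes a stable visitor identifier so the same visitor always lands in the same bucket, while the weight controls what share of traffic goes to Link A.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of consistent traffic splitting; not the actual
// Checkout Links implementation.
type Variation = "A" | "B";

function assignVariation(
  visitorId: string,   // e.g. a cookie or session identifier
  testId: string,      // keeps assignments independent across tests
  linkAWeight = 0.5    // fraction of traffic sent to Link A
): Variation {
  // Hash visitor + test so the same visitor always gets the same bucket
  const digest = createHash("sha256")
    .update(`${testId}:${visitorId}`)
    .digest();
  // Map the first 4 bytes of the hash to a number in [0, 1)
  const bucket = digest.readUInt32BE(0) / 0x100000000;
  return bucket < linkAWeight ? "A" : "B";
}

// Example: a 50/50 split - the same visitor always sees the same link
console.log(assignVariation("visitor-123", "summer-sale-test"));
```

Because the bucket is derived from a hash rather than a random draw on each visit, repeat visitors get a consistent experience without any stored state.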

Creating an A/B Test

1. Create a new A/B test
Navigate to the A/B Tests section and click “Create test” to start a new split test.

2. Set up your test details
Add a descriptive name for internal tracking and configure your test shortcode and URL.
The test name is for your internal use only - customers won’t see it.

3. Select Link A and Link B
Choose two existing Checkout Links to compare in your test. These should be different variations of the same concept.
Link A and Link B must be different links. You cannot test a link against itself.

4. Configure traffic split
Set what percentage of traffic goes to each variation. The default 50/50 split works for most tests.
Traffic is split consistently - the same customer will always see the same variation.

5. Launch your test
Save and activate your A/B test. Share the test URL to start collecting data.
Your test URL will automatically route customers to either Link A or Link B based on your traffic settings.

Understanding Your Results

Winner Declaration

The Result card shows you which variation is performing better and provides guidance on when it’s safe to end your test.

Confidence Levels:
  • More Data Needed: Fewer than 10 total sessions - keep testing
  • Low Confidence: A winner is emerging, but more data is needed for reliable results
  • Medium Confidence: Strong signal, consider ending test at 75%+ confidence
  • High Confidence: Statistically significant results - safe to end test
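Checkout Links calculates these confidence levels for you, and the exact statistics it uses aren’t documented here. As a rough illustration only, the hypothetical TypeScript sketch below applies a standard two-proportion z-test to raw sessions and orders and maps the result onto tiers like the ones above (confidenceLevel and VariationStats are made-up names).

```typescript
// Hypothetical sketch: two-proportion z-test between variations.
// The product's actual confidence calculation may differ.
interface VariationStats {
  sessions: number;
  orders: number;
}

// Standard normal CDF via a polynomial error-function approximation
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const p =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

function confidenceLevel(a: VariationStats, b: VariationStats): string {
  const total = a.sessions + b.sessions;
  if (total < 10) return "More Data Needed";

  const pA = a.orders / a.sessions;
  const pB = b.orders / b.sessions;
  const pPool = (a.orders + b.orders) / total;
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / a.sessions + 1 / b.sessions));
  if (se === 0) return "More Data Needed";

  const z = Math.abs(pA - pB) / se;
  // Confidence that the observed difference isn't just noise (1 minus the two-sided p-value)
  const confidence = 2 * normalCdf(z) - 1;

  if (confidence >= 0.95) return "High Confidence";
  if (confidence >= 0.75) return "Medium Confidence";
  return "Low Confidence";
}

// Example: 120 sessions / 9 orders vs. 115 sessions / 17 orders
console.log(confidenceLevel({ sessions: 120, orders: 9 }, { sessions: 115, orders: 17 }));
```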

Key Metrics Comparison

Compare essential performance metrics between your variations:
  • Sessions: Total visitors to each variation
  • Orders: Completed purchases from each variation
  • Revenue: Total sales generated by each variation
  • Conversion rate: Percentage of visitors who complete a purchase
  • Revenue per visitor: Average revenue generated per session
  • AOV: Average order value for completed purchases
Track daily conversion rates over time to spot patterns and confirm consistent performance differences between variations.
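If you want to sanity-check the dashboard numbers, the derived metrics follow directly from sessions, orders, and revenue. The short TypeScript sketch below shows the arithmetic (keyMetrics and VariationTotals are illustrative names, not a Checkout Links API).

```typescript
// Hypothetical sketch of how the comparison metrics relate to the raw totals.
interface VariationTotals {
  sessions: number; // total visitors
  orders: number;   // completed purchases
  revenue: number;  // total sales, in your store currency
}

function keyMetrics({ sessions, orders, revenue }: VariationTotals) {
  return {
    conversionRate: sessions > 0 ? orders / sessions : 0,    // Orders ÷ Sessions
    revenuePerVisitor: sessions > 0 ? revenue / sessions : 0, // Revenue ÷ Sessions
    averageOrderValue: orders > 0 ? revenue / orders : 0,     // Revenue ÷ Orders (AOV)
  };
}

// Example: 240 sessions, 18 orders, $1,080 in sales
// => 7.5% conversion rate, $4.50 revenue per visitor, $60 AOV
console.log(keyMetrics({ sessions: 240, orders: 18, revenue: 1080 }));
```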

Best Practices

Minimum Sample Size: Wait for at least 100 sessions total and 10 conversions per variation before making decisions.

Test Duration Guidelines

  • Minimum: 1 week to account for day-of-week effects
  • Typical: 2-4 weeks for reliable statistical significance
  • Maximum: 8 weeks to avoid external factors affecting results

When to End Your Test

End immediately if:
  • High confidence (95%+) with clear winner
  • One variation is significantly underperforming and hurting business
Continue testing if:
  • Results are too close to call (< 0.5% difference)
  • Low confidence (< 75%) regardless of apparent winner
  • Haven’t reached minimum sample size
Consider ending if:
  • Medium confidence (75-94%) with business pressure to decide
  • Clear practical significance even without statistical significance
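These guidelines are judgment calls rather than an exact formula, but they can be summarized as a simple decision rule. The hypothetical TypeScript sketch below encodes the thresholds above (endTestRecommendation and TestSnapshot are made-up names; adjust the cutoffs to your own risk tolerance, and end early regardless if one variation is clearly hurting your business).

```typescript
// Hypothetical sketch encoding the "When to End Your Test" guidelines.
interface TestSnapshot {
  totalSessions: number;
  conversionsA: number;
  conversionsB: number;
  conversionRateA: number; // e.g. 0.042 for 4.2%
  conversionRateB: number;
  confidence: number;      // e.g. 0.95 for 95%
}

function endTestRecommendation(t: TestSnapshot): string {
  // Minimum sample size: 100+ sessions total and 10+ conversions per variation
  const minSampleReached =
    t.totalSessions >= 100 && t.conversionsA >= 10 && t.conversionsB >= 10;
  const difference = Math.abs(t.conversionRateA - t.conversionRateB);

  if (t.confidence >= 0.95 && minSampleReached) {
    return "End now: high confidence with a clear winner";
  }
  if (!minSampleReached || t.confidence < 0.75 || difference < 0.005) {
    return "Continue testing: not enough data or results too close to call";
  }
  return "Consider ending: medium confidence; weigh business pressure and practical impact";
}

// Example: medium confidence with the minimum sample size reached
console.log(
  endTestRecommendation({
    totalSessions: 400,
    conversionsA: 14,
    conversionsB: 22,
    conversionRateA: 0.07,
    conversionRateB: 0.11,
    confidence: 0.88,
  })
);
```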

Common Pitfalls to Avoid

  • Peeking too often: Checking results daily can lead to premature decisions. Set a review schedule and stick to it.
  • Testing too many things: Focus on one major difference between variations
  • Changing tests mid-stream: Don’t modify either variation during the test period
  • Ending too early: Wait for statistical significance unless business impact is severe
  • Ignoring practical significance: A 0.1% improvement might be statistically significant but not worth implementing

Traffic Split Strategies

50/50 Split

  • Best for: Most A/B tests
  • Fastest path to statistical significance
  • Equal exposure for both variations

90/10 Split

  • Best for: Testing risky changes
  • Limits exposure to potentially worse variation
  • Takes longer to reach significance

80/20 Split

  • Best for: Gradual rollouts
  • Balanced risk and speed
  • Good for testing new features

Troubleshooting

Your test needs more traffic to provide meaningful results. Share your test URL more widely or wait for organic traffic to build up data.

Minimum thresholds:
  • 10 total sessions to start analysis
  • 100+ sessions for reliable results
When conversion rates differ by less than 0.5%, the difference may not be meaningful for your business. Consider:
  • Running the test longer to see if a clear pattern emerges
  • Accepting that both variations perform similarly
  • Testing more dramatic differences
If Link A or Link B shows no sessions or orders:
  • Verify both links work correctly
  • Check that traffic splitting is functioning
  • Ensure both variations are accessible to customers

Next Steps

Once you’ve completed your A/B test:
  1. Implement the winner by updating your main marketing campaigns
  2. Archive the test for future reference
  3. Start new tests to continue optimizing other aspects
  4. Document learnings to inform future testing strategies