Run A/B tests for at least one week to collect meaningful data, and avoid making changes during the test period to ensure accurate results.
How A/B Testing Works
When you create an A/B test, Checkout Links automatically:
- Splits traffic between your two link variations (Link A and Link B)
- Tracks performance for both variations independently
- Analyzes results with statistical confidence calculations
- Recommends a winner when sufficient data is collected
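Consistent splitting of this kind is usually done by hashing a stable visitor identifier, so the same customer always lands on the same variation. Checkout Links handles this for you; the sketch below only illustrates the general technique, and the `visitor_id`, `test_id`, and `split_a` names are hypothetical, not part of the product's API.

```python
import hashlib

def assign_variation(visitor_id: str, test_id: str, split_a: float = 0.5) -> str:
    """Deterministically bucket a visitor into Link A or Link B.

    Hashing (test_id, visitor_id) gives a stable number in [0, 1],
    so repeated visits by the same customer resolve to the same link.
    """
    digest = hashlib.sha256(f"{test_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
    return "A" if bucket < split_a else "B"

# Example: a 50/50 split; repeated calls return the same answer per visitor.
print(assign_variation("visitor-123", "summer-sale-test"))
print(assign_variation("visitor-123", "summer-sale-test"))  # same result every time
```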
Creating an A/B Test
1. Create a new A/B test
Navigate to the A/B Tests section and click “Create test” to start a new split test.
2. Set up your test details
Add a descriptive name for internal tracking and configure your test shortcode and URL.
The test name is for your internal use only - customers won’t see it.
3. Select Link A and Link B
Choose two existing Checkout Links to compare in your test. These should be different variations of the same concept.
Link A and Link B must be different links. You cannot test a link against itself.
4. Configure traffic split
Set what percentage of traffic goes to each variation. The default 50/50 split works for most tests.
Traffic is split consistently - the same customer will always see the same variation.
5. Launch your test
Save and activate your A/B test. Share the test URL to start collecting data.
Your test URL will automatically route customers to either Link A or Link B based on your traffic settings.
Understanding Your Results
Winner Declaration
The Result card shows which variation is performing better and provides guidance on when it's safe to end your test.
Confidence Levels:
- More Data Needed: Fewer than 10 total sessions - keep testing
- Low Confidence: An apparent winner, but more data is needed for reliability
- Medium Confidence: Strong signal - consider ending the test at 75%+ confidence
- High Confidence: Statistically significant results - safe to end the test
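Checkout Links calculates confidence for you, and its exact method isn't documented here. A common approach to this kind of comparison is a two-proportion z-test on conversion rates; the sketch below assumes that method and is only a rough approximation of how such a number can be produced.

```python
from statistics import NormalDist

def confidence_level(sessions_a: int, orders_a: int,
                     sessions_b: int, orders_b: int) -> float:
    """Approximate confidence that two conversion rates genuinely differ,
    using a two-proportion z-test (one common approach; the product's
    internal calculation may differ)."""
    if sessions_a < 1 or sessions_b < 1:
        return 0.0
    rate_a = orders_a / sessions_a
    rate_b = orders_b / sessions_b
    pooled = (orders_a + orders_b) / (sessions_a + sessions_b)
    se = (pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b)) ** 0.5
    if se == 0:
        return 0.0
    z = abs(rate_a - rate_b) / se
    # Two-sided confidence that the observed difference is real.
    return 2 * NormalDist().cdf(z) - 1

# Example: 120 sessions / 9 orders vs. 115 sessions / 17 orders.
print(f"{confidence_level(120, 9, 115, 17):.0%}")  # ~92%: medium confidence
```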
Key Metrics Comparison
Compare essential performance metrics between your variations:
- Sessions: Total visitors to each variation
- Orders: Completed purchases from each variation
- Revenue: Total sales generated by each variation
- Conversion rate: Percentage of visitors who complete a purchase
- Revenue per visitor: Average revenue generated per session
- AOV: Average order value for completed purchases
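The last three metrics are derived from the first three. A quick worked example with hypothetical numbers:

```python
def summarize(sessions: int, orders: int, revenue: float) -> dict:
    """Derive the comparison metrics shown for each variation."""
    return {
        "conversion_rate": orders / sessions if sessions else 0.0,    # share of visitors who buy
        "revenue_per_visitor": revenue / sessions if sessions else 0.0,
        "aov": revenue / orders if orders else 0.0,                   # average order value
    }

# Hypothetical results for each variation:
link_a = summarize(sessions=500, orders=25, revenue=1250.0)  # 5.0% CR, $2.50 RPV, $50 AOV
link_b = summarize(sessions=500, orders=35, revenue=1400.0)  # 7.0% CR, $2.80 RPV, $40 AOV
print(link_a, link_b, sep="\n")
```

Revenue per visitor is often the most decisive comparison: in this example, Link B converts better and earns more per session even though its average order value is lower.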
Conversion Rate Trends
Track daily conversion rates over time to spot patterns and confirm consistent performance differences between variations.
Best Practices
Minimum Sample Size: Wait for at least 100 sessions total and 10 conversions per variation before making decisions.
Test Duration Guidelines
- Minimum: 1 week to account for day-of-week effects
- Typical: 2-4 weeks for reliable statistical significance
- Maximum: 8 weeks to avoid external factors affecting results
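For a rough sense of how long your test will need, the standard two-proportion sample-size approximation gives a ballpark. The sketch below assumes that formula; the baseline conversion rate, target lift, and traffic figures are illustrative.

```python
from statistics import NormalDist

def sessions_per_variation(baseline_rate: float, relative_lift: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough sessions needed per variation to detect a relative lift in
    conversion rate (standard two-proportion approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(variance * (z_alpha + z_beta) ** 2 / (p2 - p1) ** 2) + 1

# Example: 5% baseline conversion, hoping to detect a 30% relative lift.
n = sessions_per_variation(0.05, 0.30)
print(n)                                              # ~3,800 sessions per variation
print(f"~{n * 2 / 400:.0f} days at 400 sessions/day across both links")  # ~19 days
```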
When to End Your Test
End the test immediately if:
- High confidence (95%+) with a clear winner
- One variation is significantly underperforming and hurting your business
Keep testing if:
- Results are too close to call (< 0.5% difference)
- Low confidence (< 75%), regardless of the apparent winner
- You haven't reached the minimum sample size
Consider ending early if:
- Medium confidence (75-94%) and there is business pressure to decide
- Clear practical significance even without statistical significance
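If you report on tests outside the dashboard, this guidance can be encoded as a simple rule of thumb. The thresholds below mirror this page; they are not the product's internal logic.

```python
def recommendation(total_sessions: int, confidence: float,
                   rate_difference: float) -> str:
    """Map the guidance above to an action, using this page's thresholds."""
    if total_sessions < 100:
        return "Keep testing: minimum sample size not reached"
    if abs(rate_difference) < 0.005:
        return "Keep testing or call it a tie: results are too close"
    if confidence >= 0.95:
        return "End the test: high confidence in the winner"
    if confidence >= 0.75:
        return "Consider ending if you need a decision now (medium confidence)"
    return "Keep testing: low confidence"

print(recommendation(total_sessions=240, confidence=0.96, rate_difference=0.021))
```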
Common Pitfalls to Avoid
Don’t peek too often: Checking results daily can lead to premature decisions. Set a schedule and stick to it.
- Testing too many things: Focus on one major difference between variations
- Changing tests mid-stream: Don’t modify either variation during the test period
- Ending too early: Wait for statistical significance unless business impact is severe
- Ignoring practical significance: A 0.1% improvement might be statistically significant but not worth implementing
Traffic Split Strategies
50/50 Split (Recommended)
- Best for: Most A/B tests
- Fastest path to statistical significance
- Equal exposure for both variations
90/10 Split
- Best for: Testing risky changes
- Limits exposure to potentially worse variation
- Takes longer to reach significance
80/20 Split
- Best for: Gradual rollouts
- Balanced risk and speed
- Good for testing new features
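The speed trade-off of an uneven split comes down to the smaller arm: it collects sessions more slowly, so it sets the pace. A rough estimate, reusing the ~3,800-sessions-per-variation target from the earlier sketch (the daily traffic figure is illustrative):

```python
def days_to_target(daily_sessions: int, split_a: float,
                   target_per_variation: int = 3800) -> float:
    """Days until the *smaller* arm reaches the per-variation target.
    The minority arm always sets the pace of an uneven split."""
    smaller_share = min(split_a, 1 - split_a)
    return target_per_variation / (daily_sessions * smaller_share)

for split in (0.5, 0.8, 0.9):
    print(f"{int(split * 100)}/{int((1 - split) * 100)} split: "
          f"{days_to_target(400, split):.0f} days")
# 50/50 ≈ 19 days, 80/20 ≈ 48 days, 90/10 ≈ 95 days at 400 sessions/day
```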
Troubleshooting
Test shows 'Too early to tell'
Your test needs more traffic to provide meaningful results. Share your test URL more widely or wait for organic traffic to build up data.
Minimum thresholds:
- 10 total sessions to start analysis
- 100+ sessions for reliable results
Results are 'Too close'
When conversion rates differ by less than 0.5%, the difference may not be meaningful for your business. Consider:
- Running the test longer to see if a clear pattern emerges
- Accepting that both variations perform similarly
- Testing more dramatic differences
One variation has no data
If Link A or Link B shows no sessions or orders:
- Verify both links work correctly
- Check that traffic splitting is functioning
- Ensure both variations are accessible to customers
Next Steps
Once you've completed your A/B test:
- Implement the winner by updating your main marketing campaigns
- Archive the test for future reference
- Start new tests to continue optimizing other aspects
- Document learnings to inform future testing strategies