Common Testing Scenarios
Test Different Products
Scenario: You’re running a promotion and want to know which product generates more sales.
Example Test:
- Link A: Direct to your premium water bottle ($45)
- Link B: Direct to your starter water bottle ($25)
Test Discount Strategies
Scenario: You want to know if percentage discounts or fixed-amount discounts perform better.
Example Test:
- Link A: 20% off entire order
- Link B: $10 off entire order
For best results, ensure both discount types offer similar value at your average order value so you’re testing messaging preference, not just discount size.
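To see why value matching matters: 20% off and $10 off are worth the same only when the order totals $50, so where your average order value sits decides which offer is effectively richer. A quick sketch (the numbers are illustrative, not from a real store):

```python
# Compare what each discount is worth at a given average order value (AOV).
def discount_values(aov):
    percent_off = 0.20 * aov  # Link A: 20% off the order
    fixed_off = 10.00         # Link B: $10 off the order
    return percent_off, fixed_off

for aov in (40, 50, 60):
    pct, fixed = discount_values(aov)
    print(f"AOV ${aov}: 20% off = ${pct:.2f}, $10 off = ${fixed:.2f}")

# At a $50 AOV the two offers are worth the same dollar amount, so the test
# isolates how the offer is framed rather than how much it is worth.
```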
Test Free Shipping Thresholds
Scenario: Test whether free shipping or a discount drives more conversions.
Example Test:
- Link A: Free shipping (no minimum)
- Link B: 15% off + standard shipping
Test Cart Bundles
Scenario: Compare different product bundles to find the highest-performing combination.
Example Test:
- Link A: Skincare routine bundle (cleanser + moisturizer + serum)
- Link B: Skincare basics bundle (cleanser + moisturizer)
Test Upsell Strategies
Scenario: Test different upsell approaches to maximize revenue.
Example Test:
- Link A: Main product only
- Link B: Main product + recommended add-on (with small discount)
Test Seasonal vs. Evergreen Offers
Scenario: Determine if seasonal messaging or evergreen messaging performs better.
Example Test:
- Link A: “Spring Sale - 25% off all fitness gear”
- Link B: “Limited Time - 25% off all fitness gear”
For this test, the links themselves might be identical - you’re testing how the messaging in your marketing materials affects performance when people click through.
Best Practices for Successful Tests
Start With High-Impact Changes
Test major differences first:
- Different product categories
- Significant discount variations (10% vs 25%)
- Bundle vs single product

Skip minor changes:
- Button color changes (that’s for landing page A/B testing)
- Small discount differences (15% vs 17%)
Focus on One Variable
Good test:
- Link A: Product X with 20% off
- Link B: Product Y with 20% off → Testing product preference

Bad test:
- Link A: Product X with 20% off
- Link B: Product Y with free shipping → Testing too many variables at once
Consider Your Traffic Volume
High traffic (500+ sessions/week):
- Test subtle optimizations
- Run shorter tests (1-2 weeks)
- Aim for 95% confidence

Medium traffic:
- Test more dramatic differences
- Run longer tests (2-4 weeks)
- Accept 75-85% confidence for business decisions

Low traffic:
- Only test major differences
- Be patient (4-8 weeks)
- Look for practical significance over statistical significance (see the sketch after this list)
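To make “confidence” concrete: it is, roughly, the probability that the observed difference reflects a real effect rather than chance. A minimal sketch using a one-sided two-proportion z-test (a standard approximation, not necessarily the exact method your analytics tool uses) shows why lower traffic demands bigger differences or longer tests:

```python
from math import sqrt
from statistics import NormalDist

def ab_confidence(conversions_a, sessions_a, conversions_b, sessions_b):
    """Approximate confidence that Link B's conversion rate beats Link A's,
    using a one-sided two-proportion z-test (normal approximation)."""
    rate_a = conversions_a / sessions_a
    rate_b = conversions_b / sessions_b
    pooled = (conversions_a + conversions_b) / (sessions_a + sessions_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    z = (rate_b - rate_a) / std_err
    return NormalDist().cdf(z)

# The same 6% vs 8% conversion rates at different traffic levels:
print(f"{ab_confidence(30, 500, 40, 500):.0%}")  # ~89% confidence with 500 sessions per link
print(f"{ab_confidence(6, 100, 8, 100):.0%}")    # ~71% confidence with only 100 sessions per link
```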
Set Clear Success Criteria
Before launching your test, decide:
- Primary metric: Usually conversion rate or revenue per visitor
- Minimum improvement: What % increase makes it worth implementing?
- Test duration: When will you make a decision?
- Confidence threshold: What confidence level do you need?
Example: “We’ll run this test for 2 weeks. If one variation shows 10%+ higher conversion rate with 75%+ confidence, we’ll implement it. Otherwise, we’ll continue testing or try a different variation.”
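As a minimal sketch of that decision rule (the function name and its defaults are illustrative, not part of any particular tool), the thresholds from the example above translate directly into code:

```python
def decide(lift, confidence, min_lift=0.10, min_confidence=0.75):
    """Return a decision for an A/B test result.

    lift       -- relative improvement of the variation, e.g. 0.12 for +12%
    confidence -- confidence that the variation is really better, e.g. 0.80
    """
    if lift >= min_lift and confidence >= min_confidence:
        return "Implement the winning variation"
    return "Keep testing or try a different variation"

print(decide(lift=0.12, confidence=0.80))  # Implement the winning variation
print(decide(lift=0.04, confidence=0.90))  # Keep testing or try a different variation
```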
Common Mistakes to Avoid
Testing too many things at once
Problem: Running 3+ A/B tests simultaneously with overlapping audiences.
Solution: Focus on one test at a time, or ensure tests target completely different traffic sources (e.g., Instagram vs Email).
Why it matters: Overlapping tests dilute your traffic and make it take much longer to reach significance.
Ending tests too early
Problem: Seeing “Link A is winning!” after 2 days and ending the test.
Solution: Wait for at least 1 week and 100+ sessions before making decisions.
Why it matters: Early results are often misleading due to day-of-week effects, time-of-day patterns, and random variation.
Changing the test mid-stream
Problem: Modifying Link A or Link B during the test period.
Solution: Set up your test completely before launching and don’t touch it until you have results.
Why it matters: Changes invalidate all previous data and force you to start over.
Ignoring practical significance
Problem: Implementing a “winner” that has 0.3% higher conversion rate.
Solution: Set a minimum improvement threshold (e.g., 5% or 10%) that makes the change worthwhile.
Why it matters: Tiny improvements often aren’t worth the effort to implement, and may not hold up over time.
Real-World Example: Discount Test
Store: Athletic apparel brand
Goal: Maximize revenue from email campaign
Traffic: ~1,000 email opens expected

The Test:
- Link A: 25% off sitewide
- Link B: Buy 2 get 1 free on all items
- 50/50 traffic split
- 2-week test duration
- Primary metric: Revenue per visitor
- Secondary metric: AOV
The Results:
- Link A: 1,247 sessions, $6.78 revenue per visitor
- Link B: 1,198 sessions, $9.42 revenue per visitor
- Winner: Link B (buy 2 get 1 free) with 39% higher revenue per visitor
- Confidence: 91%
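As a quick sanity check, the reported lift follows directly from the revenue-per-visitor figures above (illustrative arithmetic, not output from any tool):

```python
# Reproduce the reported lift from the figures above.
revenue_per_visitor_a = 6.78  # Link A: 25% off sitewide
revenue_per_visitor_b = 9.42  # Link B: buy 2 get 1 free

lift = (revenue_per_visitor_b - revenue_per_visitor_a) / revenue_per_visitor_a
print(f"Link B lift: {lift:.0%}")  # Link B lift: 39%
```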