Learn how to apply A/B testing to common e-commerce scenarios to discover what drives more conversions and revenue for your store.

Common Testing Scenarios

Test Different Products

Scenario: You’re running a promotion and want to know which product generates more sales.
Example Test:
  • Link A: Direct to your premium water bottle ($45)
  • Link B: Direct to your starter water bottle ($25)
What to measure: Which product drives more revenue and conversions
Insight: Even if the cheaper option gets more orders, the premium option might generate higher total revenue per visitor.
Test products at similar price points first to understand conversion patterns before testing dramatically different prices.
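To make the revenue-per-visitor point concrete, here is a minimal sketch in Python. All prices, traffic, and conversion rates are made up for illustration and do not come from a real test:

```python
# Hypothetical numbers for illustration only -- plug in your own test results.
visitors = 500  # visitors sent to each link

# Link A: premium water bottle ($45), lower conversion rate (assumed)
premium_price = 45.00
premium_conversion = 0.03  # 3% of visitors buy

# Link B: starter water bottle ($25), higher conversion rate (assumed)
starter_price = 25.00
starter_conversion = 0.05  # 5% of visitors buy

premium_rev_per_visitor = premium_price * premium_conversion  # $1.35
starter_rev_per_visitor = starter_price * starter_conversion  # $1.25

print(f"Premium: {visitors * premium_conversion:.0f} orders, "
      f"${premium_rev_per_visitor:.2f} revenue/visitor")
print(f"Starter: {visitors * starter_conversion:.0f} orders, "
      f"${starter_rev_per_visitor:.2f} revenue/visitor")
# The starter bottle wins on order count (25 vs 15), but the premium
# bottle still earns more revenue per visitor.
```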

Test Discount Strategies

Scenario: You want to know whether percentage discounts or fixed-amount discounts perform better.
Example Test:
  • Link A: 20% off entire order
  • Link B: $10 off entire order
What to measure: Conversion rate and average order value (AOV)
Insight: Percentage discounts often drive higher AOV, while fixed discounts can boost conversion rates on lower-value carts.
For best results, ensure both discount types offer similar value at your average order value so you’re testing messaging preference, not just discount size.
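One quick way to sanity-check this is to compute the cart value at which the two offers are worth the same. A minimal sketch, assuming the 20%-off vs $10-off example above and a hypothetical AOV:

```python
# Illustrative only: find the cart value where a percentage discount
# equals a fixed-amount discount in dollar terms.
percent_off = 0.20   # Link A: 20% off
fixed_off = 10.00    # Link B: $10 off

break_even_cart = fixed_off / percent_off  # $50

aov = 48.00  # hypothetical average order value for your store
pct_value_at_aov = aov * percent_off       # $9.60 off the average cart

print(f"The two offers are equal in value at a ${break_even_cart:.2f} cart.")
print(f"At a ${aov:.2f} AOV, 20% off is worth ${pct_value_at_aov:.2f}, "
      f"vs the flat ${fixed_off:.2f} off.")
# If your AOV is close to the break-even cart value, the test measures
# messaging preference rather than raw discount size.
```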

Test Free Shipping Thresholds

Scenario: Test whether free shipping or a discount drives more conversions.
Example Test:
  • Link A: Free shipping (no minimum)
  • Link B: 15% off + standard shipping
What to measure: Conversion rate, revenue per visitor, and total revenue
Insight: Free shipping often increases conversion rates, but percentage discounts can drive higher cart values.

Test Cart Bundles

Scenario: Compare different product bundles to find the highest-performing combination.
Example Test:
  • Link A: Skincare routine bundle (cleanser + moisturizer + serum)
  • Link B: Skincare basics bundle (cleanser + moisturizer)
What to measure: Conversion rate and revenue per visitor
Insight: Sometimes simpler bundles convert better despite lower AOV. Run the test to see what your customers prefer.
Make sure both bundles are positioned at similar value propositions (e.g., both “starter” or both “complete” sets) to get meaningful results.

Test Upsell Strategies

Scenario: Test different upsell approaches to maximize revenue.
Example Test:
  • Link A: Main product only
  • Link B: Main product + recommended add-on (with small discount)
What to measure: Conversion rate, AOV, and revenue per visitor
Insight: Adding a small, relevant upsell can increase AOV without hurting conversion rates if positioned correctly.

Test Seasonal vs. Evergreen Offers

Scenario: Determine whether seasonal messaging or evergreen messaging performs better.
Example Test:
  • Link A: “Spring Sale - 25% off all fitness gear”
  • Link B: “Limited Time - 25% off all fitness gear”
What to measure: Click-through rate and conversion rate
Insight: Test whether urgency-based evergreen messaging outperforms seasonal campaigns for your audience.
For this test, the links themselves might be identical; you’re testing how the messaging in your marketing materials affects performance when people click through.

Best Practices for Successful Tests

Start With High-Impact Changes

Test major differences first:
  • Different product categories
  • Significant discount variations (10% vs 25%)
  • Bundle vs single product
Avoid testing minor variations like:
  • Button color changes (that’s for landing page A/B testing)
  • Small discount differences (15% vs 17%)

Focus on One Variable

Good test:
  • Link A: Product X with 20% off
  • Link B: Product Y with 20% off → Testing product preference
Bad test:
  • Link A: Product X with 20% off
  • Link B: Product Y with free shipping → Testing too many variables at once

Consider Your Traffic Volume

High traffic (500+ sessions/week):
  • Test subtle optimizations
  • Run shorter tests (1-2 weeks)
  • Aim for 95% confidence
Medium traffic (100-500 sessions/week):
  • Test more dramatic differences
  • Run longer tests (2-4 weeks)
  • Accept 75-85% confidence for business decisions
Low traffic (< 100 sessions/week):
  • Only test major differences
  • Be patient (4-8 weeks)
  • Look for practical significance over statistical significance
If you have low traffic, focus on one really important test rather than running multiple tests simultaneously.
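If your reporting tool doesn’t show a confidence figure directly, you can approximate one with a standard two-proportion z-test on conversion rates. A minimal sketch using only the Python standard library; the session and conversion counts below are placeholders:

```python
import math

def conversion_confidence(conv_a, sessions_a, conv_b, sessions_b):
    """Approximate confidence that two conversion rates differ,
    using a two-sided two-proportion z-test."""
    p_a = conv_a / sessions_a
    p_b = conv_b / sessions_b
    pooled = (conv_a + conv_b) / (sessions_a + sessions_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    # Convert the z-score to a two-sided confidence level via the normal CDF.
    return math.erf(z / math.sqrt(2))

# Placeholder example: 420 vs 410 sessions, 29 vs 41 conversions.
confidence = conversion_confidence(29, 420, 41, 410)
print(f"Confidence that the difference is real: {confidence:.0%}")
```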

Set Clear Success Criteria

Before launching your test, decide:
  • Primary metric: Usually conversion rate or revenue per visitor
  • Minimum improvement: What % increase makes it worth implementing?
  • Test duration: When will you make a decision?
  • Confidence threshold: What confidence level do you need?
Example criteria:
“We’ll run this test for 2 weeks. If one variation shows 10%+ higher conversion rate with 75%+ confidence, we’ll implement it. Otherwise, we’ll continue testing or try a different variation.”
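If it helps, criteria like these can be written down as a simple decision rule so the final call is mechanical once the test ends. A minimal sketch using the thresholds from the example above; the conversion rates, confidence, and duration passed in are placeholders:

```python
# Assumed thresholds matching the example criteria above.
MIN_LIFT = 0.10        # require at least a 10% higher conversion rate
MIN_CONFIDENCE = 0.75  # require at least 75% confidence
MIN_DURATION_DAYS = 14

def decide(conv_rate_a, conv_rate_b, confidence, days_run):
    """Return a decision based on pre-registered success criteria."""
    if days_run < MIN_DURATION_DAYS:
        return "Keep running: test has not reached its planned duration."
    better = max(conv_rate_a, conv_rate_b)
    worse = min(conv_rate_a, conv_rate_b)
    lift = (better - worse) / worse if worse > 0 else float("inf")
    if lift >= MIN_LIFT and confidence >= MIN_CONFIDENCE:
        winner = "A" if conv_rate_a > conv_rate_b else "B"
        return (f"Implement variation {winner} "
                f"({lift:.0%} lift, {confidence:.0%} confidence).")
    return "No clear winner: continue testing or try a different variation."

# Placeholder numbers for illustration.
print(decide(conv_rate_a=0.042, conv_rate_b=0.051, confidence=0.82, days_run=14))
```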

Common Mistakes to Avoid

Problem: Running 3+ A/B tests simultaneously with overlapping audiences.
Solution: Focus on one test at a time, or ensure tests target completely different traffic sources (e.g., Instagram vs Email).
Why it matters: Overlapping tests dilute your traffic and make it take much longer to reach significance.

Problem: Seeing “Link A is winning!” after 2 days and ending the test.
Solution: Wait for at least 1 week and 100+ sessions before making decisions.
Why it matters: Early results are often misleading due to day-of-week effects, time-of-day patterns, and random variation.

Problem: Modifying Link A or Link B during the test period.
Solution: Set up your test completely before launching and don’t touch it until you have results.
Why it matters: Changes invalidate all previous data and force you to start over.

Problem: Implementing a “winner” that has a 0.3% higher conversion rate.
Solution: Set a minimum improvement threshold (e.g., 5% or 10%) that makes the change worthwhile.
Why it matters: Tiny improvements often aren’t worth the effort to implement, and may not hold up over time.

Real-World Example: Discount Test

Store: Athletic apparel brand
Goal: Maximize revenue from email campaign
Traffic: ~1,000 email opens expected
The Test:
  • Link A: 25% off sitewide
  • Link B: Buy 2 get 1 free on all items
Setup:
  • 50/50 traffic split
  • 2-week test duration
  • Primary metric: Revenue per visitor
  • Secondary metric: AOV
Results (after 2 weeks):
  • Link A: 1,247 sessions, $8,453 revenue, $6.78 revenue/visitor
  • Link B: 1,198 sessions, $11,290 revenue, $9.42 revenue/visitor
  • Winner: Link B (buy 2 get 1 free) with 39% higher revenue per visitor
  • Confidence: 91%
Implementation: The store updated its email campaign to use Buy 2 Get 1 Free messaging, resulting in a 35%+ revenue increase on subsequent campaigns.
Key Learning: Customers were more motivated by getting a “free” product than by a percentage discount, even when the math was similar.
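For reference, the revenue-per-visitor and lift figures above come from straightforward arithmetic; a minimal Python sketch that reproduces them:

```python
# Numbers from the example above.
sessions_a, revenue_a = 1247, 8453.00   # Link A: 25% off sitewide
sessions_b, revenue_b = 1198, 11290.00  # Link B: buy 2 get 1 free

rpv_a = revenue_a / sessions_a  # ~$6.78 revenue per visitor
rpv_b = revenue_b / sessions_b  # ~$9.42 revenue per visitor
lift = (rpv_b - rpv_a) / rpv_a  # ~39% higher for Link B

print(f"Link A: ${rpv_a:.2f}/visitor, Link B: ${rpv_b:.2f}/visitor, lift {lift:.0%}")
```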

Next Steps