A/B Test Calculator
Calculate statistical significance of your experiments
🧪 What is A/B Testing?
A/B testing (also called split testing) is a method of comparing two versions of a webpage, email, or other marketing asset to determine which one performs better. You split your audience randomly between version A (control) and version B (variant), then measure which version achieves more conversions.
The key challenge is determining whether the difference in performance is statistically significant or just due to random chance. This calculator uses a two-proportion z-test to determine if your results are reliable enough to make a decision.
📐 The Math Behind It
This calculator uses a two-proportion z-test: it pools the conversions from both groups, computes a z-statistic for the difference in conversion rates, and converts that statistic into a p-value. The p-value is the probability of observing a difference at least as large as the one you measured, assuming there is no real difference between the variants. A p-value below your threshold (e.g., 0.05 for 95% confidence) indicates statistical significance.
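The test described above can be sketched in a few lines of standard-library Python. This is an illustrative implementation, not the calculator's own code; the function name and the example counts are made up for demonstration.

```python
from math import erfc, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_a / n_a: conversions and visitors in the control,
    conv_b / n_b: conversions and visitors in the variant.
    Returns (z_statistic, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution,
    # via the complementary error function: p = erfc(|z| / sqrt(2)).
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical experiment: 100/1000 control vs. 130/1000 variant.
z, p = two_proportion_z_test(100, 1000, 130, 1000)
```

For these made-up numbers the p-value comes out around 0.035, so the lift would be significant at the 95% confidence level but not at 99%.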
📏 Sample Size Requirements
The sample size you need depends on your baseline conversion rate and the minimum detectable effect (MDE) you want to measure.
| Baseline Rate | 10% MDE | 20% MDE | 50% MDE |
|---|---|---|---|
| 1% | 190,000 | 48,000 | 7,700 |
| 3% | 62,000 | 15,500 | 2,500 |
| 5% | 36,400 | 9,100 | 1,500 |
| 10% | 17,200 | 4,300 | 700 |
| 20% | 7,700 | 1,900 | 310 |

*Per variant, at 95% confidence and 80% statistical power. MDE = Minimum Detectable Effect (relative change).*
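Figures like those in the table can be estimated with the standard normal-approximation sample-size formula for comparing two proportions. The sketch below is one common closed-form version; different calculators use slightly different approximations, so its results will be in the same ballpark as the table but not identical. The function name and defaults are illustrative.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_rel, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-sided
    two-proportion test (normal-approximation formula)."""
    p1 = baseline
    p2 = baseline * (1 + mde_rel)  # MDE is a relative change
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95%
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline rate, 20% relative MDE.
n = sample_size_per_variant(0.05, 0.20)
```

For a 5% baseline and a 20% relative MDE this formula gives roughly 8,200 users per variant, close to (though not exactly) the 9,100 shown in the table; the gap comes from the approximation used, not from the inputs.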