When interpreting results:
1. Wait for statistical significance (p < 0.05).
2. Check the effect size (is the improvement meaningful?).
3. Consider the confidence interval (the range of likely true values).
4. Don't over-interpret small differences.

A "winner" needs both statistical significance and practical significance.
Key Metrics to Understand
Conversion Rate
The percentage of visitors who completed the goal (e.g., a 3.2% conversion rate means 32 out of every 1,000 visitors converted).
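As a quick illustration, the calculation is just a ratio (the function name and figures below are hypothetical):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who completed the goal."""
    return conversions / visitors

# 32 conversions from 1,000 visitors -> 0.032, i.e. a 3.2% conversion rate
print(f"{conversion_rate(32, 1_000):.1%}")  # 3.2%
```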
Statistical Significance (p-value)
The probability of seeing a result at least this extreme if there's no real difference. p < 0.05 means that, if the variants truly performed the same, a difference this large would show up by chance less than 5% of the time.
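To make this concrete, here is a minimal sketch of how such a p-value can be computed with a pooled two-proportion z-test; the function name and traffic numbers are illustrative, not a description of any particular tool's internals:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, via a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate: the best estimate assuming no real difference (H0).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical data: control 300/10,000 (3.0%) vs variant 360/10,000 (3.6%)
print(two_proportion_p_value(300, 10_000, 360, 10_000))  # ~0.018
```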
Confidence Interval
The range where the true effect plausibly falls; most tools report a 95% interval. E.g., "+5% to +15%" means the real improvement is probably somewhere in that range.
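As a sketch, one common way to build such an interval is a 95% Wald interval for the absolute difference in rates; the data below is the same hypothetical example as above:

```python
import math

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% Wald confidence interval for the absolute difference in
    conversion rates (variant minus control)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Unpooled standard error of the difference between two proportions.
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

lo, hi = diff_confidence_interval(300, 10_000, 360, 10_000)
print(f"{lo:+.2%} to {hi:+.2%}")  # roughly +0.10% to +1.10% absolute
```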
Effect Size (Lift)
The magnitude of the difference, usually reported as lift: the relative improvement over the control. A 50% lift is more meaningful than a 2% lift, even if both are statistically significant.
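The calculation itself is simple (illustrative names and rates):

```python
def relative_lift(rate_control: float, rate_variant: float) -> float:
    """Relative improvement of the variant over the control."""
    return (rate_variant - rate_control) / rate_control

# Hypothetical rates: 3.0% control vs 3.6% variant
print(f"{relative_lift(0.030, 0.036):.0%}")  # 20% relative lift
```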
Decision Framework
| Scenario | Action | Confidence |
|---|---|---|
| Clear winner (p < 0.05, large effect) | Implement the winner | High |
| Statistical tie (p > 0.05) | Keep control or run longer | Low |
| Significant but small effect | Consider if worth the effort | Medium |
| Large effect but not significant | Run longer to get more data | Medium |
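To see how these rules fit together, here is a minimal sketch that encodes the table's logic; the 5% minimum-lift cutoff is an illustrative assumption, not a standard, and the right threshold depends on your implementation cost:

```python
def recommend(p_value: float, lift: float,
              min_meaningful_lift: float = 0.05) -> str:
    """Map a test result onto the decision framework above.
    The 5% minimum-lift cutoff is an illustrative choice."""
    significant = p_value < 0.05
    large_effect = abs(lift) >= min_meaningful_lift

    if significant and large_effect:
        return "Clear winner: implement it (high confidence)"
    if significant:
        return "Significant but small: consider if worth the effort"
    if large_effect:
        return "Large effect but not significant: run longer"
    return "Statistical tie: keep control or run longer"

print(recommend(p_value=0.018, lift=0.20))
# -> Clear winner: implement it (high confidence)
```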
Common Interpretation Mistakes
- Calling winners too early: Wait until the test reaches its planned sample size (see the sketch after this list for estimating it)
- Ignoring effect size: A statistically significant 0.1% improvement rarely justifies the cost of implementing it
- Cherry-picking metrics: Decide on your primary metric before the test starts, not after peeking at results
- Ignoring segments: The overall winner might be a loser for key segments
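For the first mistake, a rough sample-size estimate tells you how long "long enough" is. This sketch uses the standard normal approximation for a two-proportion test; the baseline rate and target lift are hypothetical:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(base_rate: float, min_relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect a given
    relative lift (two-sided two-proportion test, normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + min_relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = ((z_alpha + z_beta) ** 2
         * (p1 * (1 - p1) + p2 * (1 - p2))
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Hypothetical goal: 3% baseline rate, detect a 20% relative lift
print(sample_size_per_variant(0.03, 0.20))  # ~13,900 visitors per variant
```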
Get Clear Results
ExperimentHQ shows you clear, actionable results with confidence intervals and recommendations. No statistics degree required.