Implementation Guide

How to Calculate Statistical Significance Correctly

Updated December 2025
15 min read
TL;DR

Statistical significance means the observed difference is unlikely to be due to chance (typically p < 0.05, i.e. less than a 5% chance of seeing a difference at least this large if there were no real difference). Key rules: (1) Determine the sample size before starting, (2) Don't peek at results early, (3) Analyze and conclude only after the full sample has been collected.

What is Statistical Significance?

Statistical significance tells you whether an observed difference is likely real or just random noise.

Example: Variant B has a 5% higher conversion rate than variant A. Is this a real improvement, or could it have happened by chance? Statistical significance answers this question.

The Formula (Simplified)

For conversion rate tests, we use a two-proportion z-test:

z = (p1 - p2) / sqrt(p * (1-p) * (1/n1 + 1/n2))

Where:
- p1, p2 = conversion rates of variants
- n1, n2 = sample sizes
- p = pooled conversion rate (total conversions from both variants divided by total visitors)

If |z| > 1.96, the result is significant at the 95% confidence level (two-sided, equivalent to p < 0.05).
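A minimal Python sketch of this calculation; the function name and example numbers below are our own illustration, not any particular library's API:

```python
from math import sqrt, erfc

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test for conversion rates."""
    p1 = conv_a / n_a                          # conversion rate of variant A
    p2 = conv_b / n_b                          # conversion rate of variant B
    p = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p1 - p2) / se

# Hypothetical numbers: 520 conversions out of 10,000 for A, 580 out of 10,000 for B
z = two_proportion_z(520, 10_000, 580, 10_000)
p_value = erfc(abs(z) / sqrt(2))               # two-sided p-value from the normal distribution
print(f"z = {z:.2f}, p = {p_value:.3f}, significant at 95%: {abs(z) > 1.96}")
```

With these example numbers |z| comes out just under 1.96, so the apparent lift would not be declared significant despite looking promising.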

Common Mistakes

Peeking at results early

Checking results before reaching the planned sample size inflates the false positive rate; the simulation at the end of this section shows the effect.

Stopping when significant

Stopping as soon as you see p < 0.05 leads to false positives

Ignoring sample size

Small samples can show "significant" results that aren't real

Multiple comparisons

Testing many metrics without a multiplicity correction (such as Bonferroni) increases false positives
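Why peeking matters is easiest to see with a small simulation. In the sketch below (the parameters are hypothetical choices of ours), both variants have the same true 5% conversion rate, so any "significant" result is a false positive; checking at repeated interim looks and stopping at the first p < 0.05 produces far more false positives than the nominal 5%.

```python
import random
from math import sqrt, erfc

def z_stat(c_a, n_a, c_b, n_b):
    """Pooled two-proportion z statistic (0 when nothing has converted yet)."""
    p = (c_a + c_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (c_a / n_a - c_b / n_b) / se if se > 0 else 0.0

def p_value(z):
    return erfc(abs(z) / sqrt(2))              # two-sided normal p-value

random.seed(0)
TRUE_RATE = 0.05                               # A and B are identical: any "win" is spurious
N_PER_ARM = 10_000
LOOK_EVERY = 1_000                             # peek after every 1,000 visitors per arm
TRIALS = 1_000

false_positives = 0
for _ in range(TRIALS):
    c_a = c_b = 0
    for i in range(1, N_PER_ARM + 1):
        c_a += random.random() < TRUE_RATE
        c_b += random.random() < TRUE_RATE
        if i % LOOK_EVERY == 0 and p_value(z_stat(c_a, i, c_b, i)) < 0.05:
            false_positives += 1               # stopped early on a chance fluctuation
            break

print(f"False positive rate with peeking: {false_positives / TRIALS:.1%} (nominal: 5%)")
```

With this many interim looks the false positive rate comes out several times higher than 5%; analyzing only once at the full sample size keeps it near the nominal level, which is exactly what the process below enforces.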

The Correct Process

1. Calculate sample size before starting

Use a sample size calculator to determine how many visitors you need; a minimal sketch of the underlying calculation follows these steps.

2. Run until you reach that sample size

Don't peek at results. Don't stop early. Wait for the predetermined sample size.

3. Analyze results once

When you reach the sample size, run the significance test once and make a decision.
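For step 1, the usual sample size formula for comparing two proportions needs a baseline conversion rate, a minimum detectable effect, a significance level, and a power target. Here is a minimal sketch of that calculation, assuming a hypothetical 5% baseline and a 10% relative lift as the smallest effect worth detecting:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-proportion test.

    baseline:      current conversion rate, e.g. 0.05 for 5%
    relative_mde:  minimum detectable effect as a relative lift, e.g. 0.10 for +10%
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for a two-sided 5% test
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    effect = p2 - p1
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

# Hypothetical inputs: 5% baseline conversion, detect at least a 10% relative lift
print(sample_size_per_variant(0.05, 0.10))          # on the order of 31,000 per variant
```

Smaller baseline rates or smaller detectable effects push the required sample size up quickly, which is why this number has to be fixed before the test starts rather than adjusted along the way.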

Let Us Handle the Math

ExperimentHQ automatically calculates statistical significance using proper methods. No manual calculations needed.

