A/A testing means running two identical variants (A vs A) to validate your testing setup. If you see significant differences when testing identical experiences, something is broken: tracking, randomization, or your tool. Run an A/A test before your first real experiment to catch issues early.
What is A/A Testing?
A/A testing runs an experiment in which both variants are identical:
- Variant A: Original page
- Variant B: Exact same page (no changes)
Expected result: no statistically significant difference in conversion rates (see the simulated sketch below). If you do see a significant difference, your setup is broken.
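To make that expected result concrete, here is a minimal Python sketch of a single simulated A/A test. The 5% baseline conversion rate and 10,000 visitors per variant are hypothetical; the point is that both variants draw from the same distribution, so the test should usually report no significant difference.

```python
# Minimal A/A simulation: both variants share the same true conversion rate,
# so any observed "lift" is pure noise. Rates and sample sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_rate = 0.05                  # same true rate for both variants
visitors = 10_000                 # visitors per variant

conv_a = rng.binomial(visitors, true_rate)   # conversions in variant A
conv_b = rng.binomial(visitors, true_rate)   # conversions in the identical "B"

# 2x2 contingency table: [conversions, non-conversions] per variant
table = [[conv_a, visitors - conv_a],
         [conv_b, visitors - conv_b]]
chi2, p_value, _, _ = stats.chi2_contingency(table)

print(f"A: {conv_a / visitors:.4f}  B: {conv_b / visitors:.4f}  p = {p_value:.3f}")
# Expect p > 0.05 most of the time; a p-value below 0.05 here is exactly the
# ~5% false positive rate that a 0.05 significance threshold allows.
```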
Why Run an A/A Test?
- Validate randomization: Ensure traffic is split evenly (50/50 should actually be 50/50)
- Detect tracking issues: Confirm both variants track conversions correctly
- Measure your false positive rate: See how often you get "significant" results when there shouldn't be any (see the sketch after this list)
- Test your testing tool: Verify the A/B testing platform works correctly
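One way to put a number on the false positive point is to simulate many A/A runs and count how often they come out "significant" at a 0.05 threshold; with a correct setup, roughly 5% of runs will. A minimal sketch, with hypothetical traffic and conversion figures:

```python
# Repeat a simulated A/A test many times and measure the false positive rate.
# With a sound setup it should land near the significance threshold (~5%).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, runs = 0.05, 2_000
visitors, rate = 10_000, 0.05     # hypothetical traffic and conversion rate

false_positives = 0
for _ in range(runs):
    conv_a = rng.binomial(visitors, rate)
    conv_b = rng.binomial(visitors, rate)
    table = [[conv_a, visitors - conv_a], [conv_b, visitors - conv_b]]
    _, p, _, _ = stats.chi2_contingency(table, correction=False)
    if p < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / runs:.1%} (expected about {alpha:.0%})")
```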
What to Check in an A/A Test
| Metric | Expected Result | If Wrong, Indicates |
|---|---|---|
| Traffic split | 50/50 ± 2% | Sample ratio mismatch |
| Conversion rate | No significant difference | Tracking or randomization issue |
| p-value | > 0.05 | False positive or setup error |
| Variant assignment | Consistent per user | Cookie/session issues |
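One common way to check the traffic-split row above is a chi-square goodness-of-fit test against the expected 50/50 allocation, often called a sample ratio mismatch (SRM) check. A sketch with hypothetical visitor counts showing a clear 55/45 skew:

```python
# Sample ratio mismatch (SRM) check: is the observed split consistent with 50/50?
from scipy import stats

visitors_a, visitors_b = 5_500, 4_500      # hypothetical counts with a 55/45 skew
total = visitors_a + visitors_b

chi2, p_value = stats.chisquare([visitors_a, visitors_b],
                                f_exp=[total / 2, total / 2])

# SRM checks often use a strict threshold, since they run on large samples.
if p_value < 0.01:
    print(f"Possible sample ratio mismatch (p = {p_value:.4f}): check randomization")
else:
    print(f"Traffic split looks consistent with 50/50 (p = {p_value:.4f})")
```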
When to Run an A/A Test
- Before your first real experiment: Validate your setup
- After changing your tracking: Confirm it still works
- When results seem suspicious: Rule out technical issues
- Periodically (quarterly): Catch drift in your setup
Common Issues A/A Tests Catch
- Sample Ratio Mismatch: Traffic splits 55/45 instead of 50/50, indicating a randomization bug
- Tracking Discrepancy: One variant tracks fewer conversions, pointing to a tracking code issue
- Bot Traffic: A significant difference appears because bots aren't randomized properly
- Cookie Issues: Users see different variants on refresh because session handling is broken (see the bucketing sketch after this list)
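The "consistent per user" check usually comes down to deterministic bucketing: hashing a stable user ID (for example, a first-party cookie value) together with the experiment key so the same user always lands in the same variant. A minimal sketch, with a made-up experiment key and user IDs (this is not ExperimentHQ's actual assignment logic):

```python
# Deterministic, per-user bucketing: the same (user, experiment) pair always
# hashes to the same bucket, so variant assignment stays stable across refreshes.
import hashlib

def assign_variant(user_id: str, experiment_key: str = "aa-test") -> str:
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100        # map the hash to a 0-99 bucket
    return "A" if bucket < 50 else "B"    # 50/50 split between identical variants

# Re-assigning the same user always returns the same variant; if your A/A test
# shows users flipping between variants on refresh, cookie or session handling
# is the likely culprit.
assert assign_variant("user-123") == assign_variant("user-123")
print(assign_variant("user-123"), assign_variant("user-456"))
```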
Run Your First A/A Test
ExperimentHQ makes it easy to run A/A tests. Create an experiment with two identical variants and validate your setup.