Ben Young
January 17, 2020

A/B testing is a tremendous practice for marketers: pit two variations against each other and let the data decide. However, it is not bulletproof.

First, you need to think about context: what happened before and after people engaged with this A/B test? You want to make sure you haven’t tipped the results one way with a prior A/B test earlier in the consumer experience.

Then you need to establish whether you have a meaningful sample size. For that, look at your overall population of users and work backwards; a sample size calculator is the easiest way to do this (see the sketch below). Often, if you are using a third-party tool, it will guide you on whether the results are based on a meaningful data set.
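If you want to do the working-backwards step yourself rather than rely on a calculator, here is a minimal sketch in Python using statsmodels. The baseline conversion rate, the lift worth detecting, and the confidence and power levels are all assumptions chosen for illustration; plug in your own numbers.

```python
# Sketch: how many visitors each variation needs before the test is meaningful.
# Assumes a 5% baseline conversion rate, a minimum detectable lift to 6%,
# 95% confidence and 80% power - all placeholder values.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05   # assumed current conversion rate
target_rate = 0.06     # smallest lift worth detecting

effect_size = proportion_effectsize(target_rate, baseline_rate)
required_n = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # 5% chance of a false positive
    power=0.8,    # 80% chance of detecting a real lift
    ratio=1.0,    # equal traffic split between the two variations
)
print(f"Visitors needed per variation: {required_n:.0f}")
```

If the required number is larger than the traffic you can realistically send through the test, the result will not be meaningful no matter how it looks.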

Check your gut too: do these results make sense? After all, A/B tests are most often tests of human behavior. Is this what you’d expect? Sometimes it isn’t, and you want to dig into why that may be and re-test your original assumptions.
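When a result surprises you, one quick numerical sanity check before digging deeper is to confirm the gap is bigger than noise. A sketch with hypothetical counts, using a two-proportion z-test:

```python
# Sketch: quick significance check on raw A/B counts (placeholder numbers).
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 95]    # conversions for variation A and B (hypothetical)
visitors = [2400, 2350]    # visitors shown each variation (hypothetical)

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A large p-value means the gap could easily be chance -
# a good prompt to revisit your original assumptions.
```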

The second question is: what should we test now? A/B testing is often part of an ongoing, continuous process of discovery, like an onion, peeling back each layer to reveal the next set of learnings.
