Tips & Tricks: What happens when there is no clear winner?
Hi there,
Very often, even after you’ve been running an A/B test for a few weeks (or even months!), there is still no clear winner. That doesn’t mean you should write the campaign off as a failure: the results of these tests are just as informative as those with a winning variation, and digging into the performance metrics can reveal what is and isn’t working.
Below, I have listed the actions you should take when you run into this situation.
Analyze your audiences: If a variation performs better with specific audiences, consider creating new tests specifically tailored for these segments.
In the campaign below, no winning variation has been declared. However, when we zoom in on the performance of two specific audiences, the results differ quite a bit and clearly show a preferred variation for each audience.
For Audience #1, the Affinity variation is the winner with a 107% uplift, and for Audience #2, the Automatic variation is the winner with a 10% uplift.
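If you want to sanity-check this kind of segment breakdown yourself outside the reporting UI, here is a minimal sketch in Python. It assumes you can export visitor and conversion counts per audience and variation; the column names and counts below are hypothetical and chosen only to reproduce the uplifts mentioned above.

```python
import pandas as pd

# Hypothetical export: one row per audience/variation with visitor and conversion counts.
data = pd.DataFrame({
    "audience":    ["Audience #1", "Audience #1", "Audience #2", "Audience #2"],
    "variation":   ["Control", "Affinity", "Control", "Automatic"],
    "visitors":    [5000, 5000, 8000, 8000],
    "conversions": [100, 207, 400, 440],
})

# Conversion rate for each audience/variation combination.
data["rate"] = data["conversions"] / data["visitors"]

# Uplift of each variation vs. the control within the same audience:
# uplift = (variation_rate - control_rate) / control_rate
for audience, group in data.groupby("audience"):
    control_rate = group.loc[group["variation"] == "Control", "rate"].iloc[0]
    for _, row in group[group["variation"] != "Control"].iterrows():
        uplift = (row["rate"] - control_rate) / control_rate
        print(f"{audience}: {row['variation']} uplift = {uplift:.0%}")
```

Running this prints roughly "Audience #1: Affinity uplift = 107%" and "Audience #2: Automatic uplift = 10%", mirroring the per-audience winners described above.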
Investigate the secondary metrics: While the primary metric is important, always assess the performance of secondary metrics as well.
For example, in the campaign below, the primary metric (revenue) is showing positive uplift but has not yet reached the threshold of 95% required for statistical significance. However, the secondary metric (purchases) does have a winning variation.
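If you like to double-check a secondary conversion metric such as purchases with your own numbers, a quick two-proportion z-test is one way to do it. The sketch below uses statsmodels and made-up purchase counts; your testing platform’s own statistics (such as Probability to Be Best) remain the source of truth.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical purchase counts exported from the campaign report.
purchases = [620, 540]     # variation, control
visitors  = [10000, 10000]

# Two-sided z-test for a difference in purchase rates between variation and control.
z_stat, p_value = proportions_ztest(count=purchases, nobs=visitors)

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print("Significant at 95%" if p_value < 0.05 else "Not yet significant at 95%")
```

With these illustrative counts the purchase metric clears the 95% bar even though a revenue metric on the same traffic might not, which is exactly the situation described above.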
If the test falls flat: Don’t be discouraged! Take a step back and assess other elements of the campaign that could be affecting its performance. Are the variations too similar to each other? Is the experience hidden and going unnoticed? Is the experience better suited for a different stage in the customer journey?
These learnings will help you build future tests, ensuring you see real impact with every new iteration.
Best,
Meira Farber
Related resources
- Learn more about Probability to Be Best and Statistical Significance here