When you run an A/B test, the Dynamic Yield predictive targeting engine runs in the background, searching for hidden personalization opportunities.
Predictive targeting tackles one of the biggest challenges of personalization: Experiences optimized for the average user. In A/B tests, a winning variation is selected based on the preferences of all users exposed to the test: If most users significantly prefer variation A, it is then served to all users.
Sometimes, however, there are subsets of users – audiences – for whom an alternative variation might work better. In these cases, there is room to improve the overall outcome by serving each audience the variation it actually prefers.
Predictive targeting ensures that even tests that conclude without an overall winner have a chance to generate uplift by serving a seemingly "losing" variation to an audience that actually prefers it.
The following image shows a test that resulted in an overall downlift of -0.43%. Predictive targeting (shown in the overview panel) suggests an alternative course of action: If Variation A is served to the "First Visit PDP" audience while all other users are served the control, it could instead result in an uplift of +12.35%.
How it works
- Predictive targeting analyzes the results of every A/B test you run, scanning all audiences and selecting those whose winning variation differs from the leading or winning variation for the overall traffic.
- For each of these audiences, predictive targeting computes an alternative course of action – a personalization opportunity – that represents the expected uplift if the audience and the rest of the users (those not in the audience) were each served their preferred variation, as opposed to the status quo, in which all traffic is served the variation that is currently winning for the overall traffic. (A code sketch of this logic appears after this list.)
- All resulting personalization opportunities are ranked based on the expected uplift they would generate, and the top 5 are presented as individual cards in the side panel of the A/B test report.
- Each card indicates:
  - Which variations should be served to the audience and to the rest of the traffic, respectively.
  - The expected uplift of this course of action, as opposed to serving the same variation to all traffic.
- Click the additional options menu to apply the relevant audience to the rest of the report, and further analyze its results in detail.
- Click Show More under the first card to see additional opportunities, if available.
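The core of this logic can be sketched in a few lines of Python. This is an illustration only, not Dynamic Yield's implementation: segment results are plain dicts mapping each variation to a (normalization units, metric totals) pair, and the `winner` check is a simplified stand-in for the Probability to Be Best test described later.

```python
def normalized(segment, variation):
    """Normalized metric for one variation: metric totals / units."""
    units, totals = segment[variation]
    return totals / units

def winner(segment):
    # Stand-in for the real statistical check (Probability to Be Best);
    # here we simply pick the variation with the best normalized metric.
    return max(segment, key=lambda v: normalized(segment, v))

def expected(segment, variation):
    # Serve `variation` to the whole segment: total units x normalized metric.
    units = sum(u for u, _ in segment.values())
    return units * normalized(segment, variation)

def complement(overall, audience):
    # "Rest of users" is the overall results minus the audience results.
    return {v: (overall[v][0] - audience[v][0], overall[v][1] - audience[v][1])
            for v in overall}

def rank_opportunities(overall, audiences, top_n=5):
    lead = winner(overall)
    baseline = expected(overall, lead)
    opportunities = []
    for name, audience in audiences.items():
        if winner(audience) == lead:
            continue  # keep only audiences whose winner differs from overall
        rest = complement(overall, audience)
        total = expected(audience, winner(audience)) + expected(rest, winner(rest))
        opportunities.append((name, winner(audience), total / baseline - 1))
    # Rank by expected uplift; the report surfaces the top 5 as cards.
    return sorted(opportunities, key=lambda o: o[2], reverse=True)[:top_n]
```

The full example below plugs concrete numbers into exactly this computation.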
A full example
To better understand how personalization opportunities are calculated, imagine a test with the following results on the overall traffic, with Variation B leading, but no clear winner:
Overall traffic
| Variation Name | Normalization (Users) | Metric Totals (Purchases) | Normalized Metric (Purchases/User) | Probability to Be Best |
| --- | --- | --- | --- | --- |
| Variation A | 50,000 | 12,000 | 0.24 | 30% |
| Variation B | 50,000 | 20,000 | 0.4 | 70% |
If all the traffic (100,000 users) is served Variation B, with a normalized metric of 0.4, the expected number of purchases would be approximately 40,000 (100,000 * 0.4).
Now, here is a breakdown of the results for the Mobile Users audience, where a different variation, Variation A, was declared a winner:
Mobile Users
| Variation Name | Normalization (Users) | Metric Totals (Purchases) | Normalized Metric (Purchases/User) | Probability to Be Best |
| --- | --- | --- | --- | --- |
| Variation A | 10,000 | 7,000 | 0.7 | >99% 🏆 |
| Variation B | 10,000 | 2,000 | 0.2 | <1% |
If all the Mobile Users traffic (20,000 users) is served Variation A, with a normalized metric of 0.7, the expected number of purchases would be approximately 14,000 (20,000 * 0.7).
Next, look at the results for the complement of the Mobile Users audience, that is, all users who do not belong to the audience:
The rest of the users (not Mobile Users)
| Variation Name | Normalization (Users) | Metric Totals (Purchases) | Normalized Metric (Purchases/User) | Probability to Be Best |
| --- | --- | --- | --- | --- |
| Variation A | 40,000 (50,000 - 10,000) | 5,000 (12,000 - 7,000) | 0.125 | <1% |
| Variation B | 40,000 (50,000 - 10,000) | 18,000 (20,000 - 2,000) | 0.45 | >99% 🏆 |
Note how the normalization and metric totals are simply the difference between the overall traffic results and the Mobile Users audience results, and how Variation B is now declared the winner.
If the remaining traffic (80,000 users) is served Variation B, with a normalized metric of 0.45, the expected number of purchases would be approximately 36,000 (80,000 * 0.45).
Summarizing our potential courses of action:
| Course of Action | Normalization (Users) | Expected Metric Totals (Purchases) | Course of Action Uplift |
| --- | --- | --- | --- |
| Serve Variation B to all users | 100,000 | 40,000 | 0% |
| Serve Variation A to Mobile Users traffic, Variation B to the rest | 100,000 (20,000 + 80,000) | 50,000 (14,000 + 36,000) | 25% |
Note how, by serving Variation A to Mobile Users traffic and Variation B to the rest, the expected overall purchases increase by 25% (from 40,000 to 50,000).
This course of action will be surfaced as a personalization opportunity: Gain 25% uplift by applying Variation A to the Mobile Users audience and Variation B to the rest of the users.
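For a quick sanity check, here is the same arithmetic in Python (variable names are illustrative):

```python
# Baseline: serve Variation B to all 100,000 users.
baseline = 100_000 * 0.40              # 40,000 expected purchases

# Complement of Mobile Users for Variation B: overall minus audience.
rest_b_users     = 50_000 - 10_000     # 40,000
rest_b_purchases = 20_000 - 2_000      # 18,000
rest_b_metric    = rest_b_purchases / rest_b_users   # 0.45

# Course of action: Variation A to Mobile Users, Variation B to the rest.
mobile_total = 20_000 * 0.70           # 14,000
rest_total   = 80_000 * rest_b_metric  # 36,000
opportunity  = mobile_total + rest_total  # 50,000

uplift = opportunity / baseline - 1
print(f"Expected uplift: {uplift:.0%}")   # Expected uplift: 25%
```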
Statistical significance of personalization opportunities
Personalization opportunities are screened using strict checks so that only the most robust recommendations are surfaced, given the available data (see the sketch after this list):
- Opportunities are calculated only for the primary metric of your test, based on data that excludes outliers.
- The test must have been live for more than 14 days.
- The variation suggested for the targeted audience must have been declared the winner for that audience, and the alternative variation must have been declared the winner for the remaining traffic. Both are determined using the Probability to Be Best threshold selected in the test settings.
- The suggested course of action must outperform serving the leading variation to all users by at least 1%.
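As a sketch, these checks can be collapsed into a single screening predicate. Everything here is illustrative: the function, its parameters, and the 0.95 default are assumptions, and the actual threshold is whatever Probability to Be Best value is selected in the test settings.

```python
from datetime import date

def passes_screening(test_start: date, today: date,
                     audience_p2bb: float, rest_p2bb: float,
                     expected_uplift: float,
                     p2bb_threshold: float = 0.95) -> bool:
    """Return True only if a candidate opportunity clears every check."""
    return (
        (today - test_start).days > 14       # live for more than 14 days
        and audience_p2bb >= p2bb_threshold  # audience winner is declared
        and rest_p2bb >= p2bb_threshold      # complement winner is declared
        and expected_uplift >= 0.01          # beats the baseline by at least 1%
    )
```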
Exporting predictive targeting results
Download the CSV report to get a spreadsheet with the courses of action. The data in the report provides a deeper understanding of the predictive targeting opportunity.
| Field | Definition |
| --- | --- |
| Campaign Name | The name of the campaign. |
| Experience Name | The name of the experience. |
| Version ID | The internal version ID of the specific test. |
| From Date | The start date of the version. |
| To Date | The last date on which the predictive targeting model ran (when the version has ended, this is the end date of the version). |
| Metric Name | The name of the test's primary metric. |
| Normalization Type | The name of the unit by which the chosen metric is normalized (for example, users), depending on stickiness and the type of metric. |
| Course of Action Number | The priority of the course of action, ranked by the uplift it is expected to generate, with 1 being the highest uplift (top priority). |
| Course of Action | A textual description of the course of action. For example: "Variation A" to users who belong to the "Direct Traffic" audience and "Control Group" to the rest of the users. |
| Course of Action Normalization Totals | The course of action's count of the normalization unit by which the chosen metric is normalized. This value is the same for all courses of action. For example, if the normalization type is users, the normalization totals show the sum of all users exposed to the test. |
| Course of Action Normalized Metric | The course of action's expected metric totals divided by its normalization unit count. |
| Course of Action Expected Metric Totals | The sum of events (or event values) expected to be generated by applying the course of action: the sum of Audience Expected Metric Totals and Rest of Users Expected Metric Totals. |
| Course of Action Uplift | The ratio of the Course of Action Normalized Metric to the normalized metric of the test's original leading variation, minus 1. |
| Audience Name | The audience that predictive targeting identified as insightful and for which it recommends a course of action. |
| Audience Variation Name | The variation that is significantly preferred by users belonging to the audience. |
| Audience Normalization Totals | The audience's count of the normalization unit by which the chosen metric is normalized. |
| Audience Normalized Metric | The audience's normalized metric, as calculated from the observed data in the test results. |
| Audience Expected Metric Totals | The metric totals expected from serving the winning variation to all users in the audience, calculated by multiplying the Audience Normalization Totals by the Audience Normalized Metric. |
| Audience Uplift | The ratio of the Audience Normalized Metric to the normalized metric of the test's original leading variation, minus 1. |
| Rest of Users Variation Name | The variation that is significantly preferred by the rest of the users (the audience's complement). |
| Rest of Users Normalization Totals | The rest of users' count of the normalization unit by which the chosen metric is normalized. |
| Rest of Users Normalized Metric | The rest of users' metric totals (events generated by users who don't belong to the audience) divided by the Rest of Users Normalization Totals. |
| Rest of Users Expected Metric Totals | The metric totals expected from serving the winning variation to the rest of the users, considering the Rest of Users Normalized Metric. |
| Rest of Users Uplift | The ratio of the Rest of Users Normalized Metric to the normalized metric of the test's original leading variation, minus 1. |
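All three Uplift fields share the same formula: a normalized metric divided by the normalized metric of the test's original leading variation, minus 1. A small illustration using the numbers from the worked example above (illustrative code, not the report's implementation):

```python
def uplift(normalized_metric: float, leading_normalized_metric: float) -> float:
    # Ratio of a normalized metric to the original leading variation's
    # normalized metric, minus 1.
    return normalized_metric / leading_normalized_metric - 1

# The leading variation's normalized metric in the example is 0.40.
print(f"{uplift(0.50, 0.40):.1%}")   # Course of Action Uplift: 25.0% (50,000 / 100,000 users)
print(f"{uplift(0.70, 0.40):.1%}")   # Audience Uplift: 75.0% (Variation A on Mobile Users)
print(f"{uplift(0.45, 0.40):.1%}")   # Rest of Users Uplift: 12.5% (Variation B on the rest)
```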