This replaces the Global Control Group test. If you're currently using the Global Control Group, see Upgrade from the Global Control Group setup to Experience OS Impact to learn how to move to this new report.
Measure the overall impact of your personalization program by running a global test (previously known as the Global Control Group test) that compares users who receive personalized Dynamic Yield experiences with users who don't.
For example:
95% of users are in the Dynamic Yield Experiences group. These users are served multiple experiences you want to measure the impact of. | 5% of users are in the Global Control group. These users receive, to the extent possible, a non-Dynamic Yield experience. |
---|---|
Experience A: PDP Recommendations | – |
Experience B: Exit Intent | – |
Experience C: Personalized HP Banners | Experience C (baseline): Static HP Banners |
By comparing the performance of the Dynamic Yield Experiences and Control Group variations, you can estimate the total impact of experiences A, B, and C.
Note: To start measuring the overall impact of your campaigns using a global test, a feature flag must be enabled for your section. Contact your Customer Success Manager for more details.
Start measuring the impact of your personalization
Step 1: Add experiences to the test
Adding experiences to the global test is an ongoing process. If you have personalized experiences, target them (using the Global Test targeting condition) to the Dynamic Yield Experiences group. This ensures that only users in the 95% population group receive those experiences.
If the campaign requires a default experience (served to all traffic, including the 5% control group), create the experience and serve it to the Global Control group using the same targeting condition.
If you're using active cookie consent, consider creating a second fallback experience with no targeting conditions that can be served to opted-out users.
Best practices: What should you measure?
- Consult your Customer Success Manager regarding personalization and testing efforts that best measure the value of your personalization program.
- Don't include experiences that have no effect on the user's experience (for example, firing an event or setting a cookie).
- Don't include experiences that must be served and are not optimized (for example, a cookie consent notification).
Note: As with any A/B test, there's a small opportunity cost inherent in reserving a small group that isn't served Dynamic Yield experiences. In exchange for this cost, you gain the ability to measure the impact of your personalization program.
Step 2: Activate the Experience OS Impact report
Go to the Experience OS dashboard, and click Experience OS Impact.
The first time you enter the report, an onboarding page appears. Follow the instructions and make sure to target all the campaigns that are part of your personalization program (if you followed the instructions in Step 1, this is already done). Next, click Generate Report. The report will be available the next morning (by 9 AM).
Important: Don’t activate the report until after you target the relevant campaigns. Activating the report prematurely will affect your measurement because the result will reflect an A/A test and not the true impact of your personalization efforts.
Note: If you previously measured the impact of personalization using the old global control group test setup with an evaluator (for script-based campaigns), follow the instructions to upgrade to the new global test setup and start using the dedicated Experience OS Impact report.
Step 3: Start measuring the impact of your personalization efforts
When the report is active, add the KPIs relevant to your business and use the measurements in the report to understand the direct impact of your personalization efforts. Then, share the insights with your organization and take action to improve your program even further.
When adding new campaigns and experiences in Experience OS, follow the guidelines in Step 1 and consider adding them to the global test. You can use the campaign list in the Experience OS Impact report to get an overview of which campaigns are part of the test and which are not.
The Experience OS Impact report
The Experience OS Impact report is dedicated to measuring the impact of personalized experiences on your overall site performance.
As in the Experience report, you can add KPIs available on your site and other key business metrics (up to 10 metrics). The first metric in the list is considered the primary metric and can be changed at any time.
Interpreting the results:
Each metric included in the report compares the performance of the users in the Dynamic Yield Experiences group to those in the Global Control group in the selected period.
Gain | The additional units or value contributed by users in the Dynamic Yield Experiences group, also called the incremental metric. It is calculated by multiplying the number of users in the Dynamic Yield Experiences group by the difference between the normalized metrics of the two groups. |
Uplift | The ratio of the normalized metric of the Dynamic Yield Experiences group to that of the Global Control group, minus 1. |
Probability to Be Best | The chance that the variation (in this case, the Dynamic Yield Experiences group) outperforms the Global Control group. In measuring overall impact, Probability to Be Best indicates whether your personalization program yields conclusively better results than no personalization at all. If the Dynamic Yield Experiences group is declared the winner, a green trophy appears next to the metric's name. If the Global Control group is the winner, an orange trophy is displayed. Note that the first day of the test must be included in the report's selected timeframe to get the Probability to Be Best value. |
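The Gain and Uplift definitions above can be sketched in a few lines of JavaScript. The function name and all figures below are hypothetical, for illustration only:

```javascript
// Sketch of the Gain and Uplift formulas described above
// (hypothetical numbers, not real report data).
function impact(expUsers, expTotal, ctrlUsers, ctrlTotal) {
  const expPerUser = expTotal / expUsers;    // normalized metric, Experiences group
  const ctrlPerUser = ctrlTotal / ctrlUsers; // normalized metric, Control group
  return {
    gain: expUsers * (expPerUser - ctrlPerUser), // incremental units or value
    uplift: expPerUser / ctrlPerUser - 1,        // relative difference
  };
}

// Example: 950,000 Experiences users generating $10.45M in revenue vs.
// 50,000 control users generating $500K:
const result = impact(950000, 10450000, 50000, 500000);
console.log(result.gain); // 950000 (incremental revenue)
console.log((result.uplift * 100).toFixed(1) + '%'); // "10.0%"
```

Gain is expressed in the metric's own units (for example, revenue), while uplift is a relative ratio.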
Working with timeframes
The report initially covers the entire test duration ("Overall"). To focus on a specific timeframe, adjust the selector at the top of the report. You can select any period within the test duration by selecting Custom Range, or use the predefined periods of Year to Date or Last Year.
In contrast to the Experience Report, custom timeframes include all users who visit your site and their activity, rather than only new users. This provides results based on all site traffic.
Note: The Probability to Be Best value is available only when you select the first day of the test.
The Over Time bar chart
Analyze the gain and uplift of each metric in smaller time increments and see the changes over time. Select the time unit that best fits your requirements and group the data by week, month, or quarter.
The bars in the graph represent the metric gains added during each period. The summary of the entire period appears above the chart. Note that the overall metric gain is the sum of all the periods, but the overall uplift can't be derived by summing the per-period uplifts.
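As a small illustration of why per-period gains add up to the overall gain while uplifts don't, here is a sketch with hypothetical weekly figures (metrics normalized per user):

```javascript
// Hypothetical weekly figures showing that gains sum across periods
// but uplifts do not combine the same way.
function stats(expUsers, expTotal, ctrlUsers, ctrlTotal) {
  const gain = expUsers * (expTotal / expUsers - ctrlTotal / ctrlUsers);
  const uplift = (expTotal / expUsers) / (ctrlTotal / ctrlUsers) - 1;
  return { gain, uplift };
}

const week1 = stats(100000, 1100000, 5000, 50000);   // gain 100000, uplift 10%
const week2 = stats(200000, 2600000, 10000, 120000); // gain 200000, uplift ~8.3%
const overall = stats(300000, 3700000, 15000, 170000); // both weeks combined

console.log(week1.gain + week2.gain);   // 300000, matching the overall gain
console.log(overall.uplift.toFixed(3)); // "0.088", not week1.uplift + week2.uplift
```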
Understand which campaigns are measured in the global test
In the report, all your campaigns are divided into three groups:
- Dynamic Yield Experiences: Campaigns that are part of the global test and served only to users who belong to the Dynamic Yield Experiences group.
- Global Control: Campaigns that are part of the global test and served only to users who belong to the Global Control group.
- Unmeasured: Campaigns that aren’t part of the global test and are served to all users (according to their targeting condition).
Use this overview to ensure all the campaigns in your personalization program are measured and see which campaigns are influencing the global test performance.
Executive spotlight
Easily share a summary of the global test results for the chosen primary metric. Changing the selected timeframe and the primary metric generates a new summary.
Export results
Click to export the report data to a CSV file for further analysis in a different platform, or to share it externally.
The export includes the daily cumulative data of all the metrics in the report. Each row represents a date and the snapshot of the results on that date, based on the data collected from the first day of the test.
The export file structure:
Column Name | Description |
---|---|
Section ID | Your site ID. |
Date | The run date of the aggregation jobs. The data in this row is based on all data collected from the beginning of the test until this date. |
Variation | Dynamic Yield Experiences or Global Control. |
Users | Cumulative number of distinct users until the specified date. |
Alternative Normalization Type | The type of denominator used to normalize the metric when it isn't normalized by user count. In the Experience OS Impact report, the only relevant metric is AOV, and the alternative normalization is Purchases. |
Alternative Normalization | Relevant to AOV metric only, this is the cumulative count of purchases until the specified date. |
Alternative Normalization Excluding Outliers | Relevant to AOV metric only, this is the cumulative count of purchases until the specified date, excluding outliers. |
Metric Name | The name of the measured metric. |
Metric Totals | The cumulative count of events, or the sum of the events' value for the specific metric until the specified date. |
Metric Totals Excluding Outliers | The cumulative count of events, or the sum of the events' value for the specific metric until the specified date, excluding outliers. |
Gain | The additional units or value contributed by users in this variation. |
Gain Excluding Outliers | The additional units or value contributed by users in this variation, excluding outliers. |
Uplift | The ratio of the normalized metric of the Dynamic Yield Experiences group to that of the Global Control group, minus 1. |
Uplift Excluding Outliers | The ratio of the normalized metric, excluding outliers, of the Dynamic Yield Experiences group to that of the Global Control group, minus 1. |
Probability to Be Best | The probability that this variation outperforms the other variation. |
Probability to Be Best Excluding Outliers | The probability that this variation outperforms the other variation, based on data excluding outliers. |
Declaration | Whether or not the variation was declared a winner. |
Declaration Excluding Outliers | Whether or not the variation was declared a winner, based on data excluding outliers. |
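Because each export row is a cumulative snapshot from the first day of the test, per-day values can be recovered by differencing consecutive rows. A minimal sketch in JavaScript, assuming a simplified row shape (the field names and numbers below are illustrative, not the exact CSV column names):

```javascript
// Hypothetical parsed export rows for one variation and one metric.
// Each row is cumulative from the start of the test.
const rows = [
  { date: '2024-01-01', users: 1000, metricTotals: 5000 },
  { date: '2024-01-02', users: 2500, metricTotals: 13000 },
  { date: '2024-01-03', users: 4200, metricTotals: 22400 },
];

// Difference consecutive snapshots to get per-day values:
// newly counted users and the metric value added that day.
const daily = rows.map((row, i) => ({
  date: row.date,
  newUsers: row.users - (i > 0 ? rows[i - 1].users : 0),
  metricValue: row.metricTotals - (i > 0 ? rows[i - 1].metricTotals : 0),
}));

console.log(daily[1]); // { date: '2024-01-02', newUsers: 1500, metricValue: 8000 }
```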
Upgrade from the Global Control Group setup to Experience OS Impact
Previously, the most common way to set up a global test was by injecting JavaScript code with an evaluator to split all users into the Dynamic Yield Experiences and Global Control groups. This implementation was relevant only for script-based campaigns.
If you're using this setup to measure your personalization program and want to upgrade from the previous system to the new Experience OS Impact report, do one of the following:
- Go over all the campaigns that are currently part of your Global Control Group test and replace the existing evaluator-based targeting condition with Global Test.
- Copy the following JavaScript code that replicates the logic of the new targeting condition, and replace the code in your existing Global Control Group evaluators:
```javascript
function gcgCheck() {
  const DY_EXP_GROUP = 'Dynamic Yield Experiences';
  const GLOBAL_CONTROL_GROUP = 'Control Group';
  const GCG_GROUP_VALUE = 4;
  const GCG_TOTAL_GROUPS = 100;
  const id = DY.dyid;
  return new Promise(resolve => {
    let highBytes;
    if (window.BigInt) {
      // Extract the high 32 bits of the 64-bit DYID
      const i64 = BigInt(id);
      highBytes = Number(i64 >> BigInt(32));
    } else {
      // Fallback for browsers without BigInt support
      highBytes = DYO.Long.fromString(id).high;
    }
    resolve(highBytes);
  }).then(highBytes => {
    // Map the user to one of 100 groups: groups 0-4 (5% of users)
    // form the control group, the rest receive DY experiences
    const cycle = highBytes >>> 20;
    const group = cycle % GCG_TOTAL_GROUPS;
    return group > GCG_GROUP_VALUE ? DY_EXP_GROUP : GLOBAL_CONTROL_GROUP;
  });
}
```
The second option is a fast and effective solution if you want to start using the Experience OS Impact report immediately. Over time, we recommend replacing the targeting condition in all measured campaigns with the new Global Test targeting condition, as it enables you to:
- Start measuring users' performance beginning with their first pageview.
- Measure the impact of API campaigns.
You must implement one of the upgrade options before activating the Experience OS Impact report. The new report splits users by their DYID rather than by a cookie injected via a campaign. Activating the report before completing these steps will harm the validity of the results in the report.
Note: After you complete the upgrade to the new Experience OS Impact report, Global Control Group campaigns still running with the old implementation will begin behaving like an A/A test. This is because of the different method of splitting users into the two groups, and it applies to any previous implementation you might have used. To prevent old data from getting mixed into the current test, we recommend pausing the previous tests.