Dynamic Yield enables you to fine-tune the serving behavior, conversion attribution, and A/B test settings of all variations in each experience. Not all settings are relevant for all campaign types, and the A/B test options don't appear at all unless your experience is using the A/B test allocation method and has more than one variation.
To access the experience settings, click the settings icon when creating or editing an experience.
The following settings are displayed:
A/B Test Options
The following settings are visible only if you set the allocation method to A/B Test and create more than one variation. When an A/B test has collected enough data, a winning variation is declared. The winner is chosen after demonstrating, with high statistical validity, that it's the best variation and that it's safe to expect it to remain the best over the long run. These settings affect how the winning variation is declared.
Winner Significance Level
This setting defines the minimum Probability to Be Best score required for a variation to be declared the winner. You can set a value between 75% and 99% (the default is 95%). A higher significance level reduces the chance of declaring a false winner, at the cost of requiring more data to be collected.
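Dynamic Yield doesn't publish the exact math behind the Probability to Be Best score, but one common way to produce such a score is Bayesian simulation over Beta posteriors. The sketch below is illustrative only; the function name, the uniform Beta(1, 1) prior, and the example numbers are assumptions, not the product's implementation:

```python
import numpy as np

def probability_to_be_best(successes, trials, draws=100_000, seed=0):
    """Estimate each variation's Probability to Be Best by sampling
    conversion rates from a Beta(1 + successes, 1 + failures) posterior."""
    rng = np.random.default_rng(seed)
    # One column of sampled conversion rates per variation
    samples = np.column_stack([
        rng.beta(1 + s, 1 + (n - s), size=draws)
        for s, n in zip(successes, trials)
    ])
    best = samples.argmax(axis=1)
    return np.bincount(best, minlength=len(trials)) / draws

# 120/2000 vs. 150/2000 conversions; a winner is declared only if the
# top score clears the configured significance level (default 95%)
scores = probability_to_be_best([120, 150], [2000, 2000])
print(scores, scores.max() >= 0.95)
```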
Minimum Test Duration
A winning variation isn't declared until this time period passes from the beginning of the test, even if one of the variations has already reached the required statistical validity. This ensures that conclusions about all users aren't drawn based only on weekend visitors or other time-skewed traffic. We recommend running a test for 2 weeks before declaring a winner.
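Combined with the significance level, this means a winner requires two gates to pass: enough elapsed time and enough statistical evidence. A minimal sketch of that combined check, with hypothetical names and the recommended two-week duration as the default:

```python
from datetime import datetime, timedelta
from typing import Optional

MIN_TEST_DURATION = timedelta(weeks=2)   # recommended minimum duration

def can_declare_winner(test_started_at: datetime,
                       top_score: float,
                       significance: float = 0.95,
                       now: Optional[datetime] = None) -> bool:
    """Both gates must pass: the minimum duration AND the significance level."""
    now = now or datetime.now()
    return (now - test_started_at >= MIN_TEST_DURATION
            and top_score >= significance)
```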
Email Notification
By default, the person who created the experience will be notified by email when a winning variation is declared. However, you can change this by adding or removing email addresses.
Advanced Settings
Variation Stickiness
Variation stickiness determines whether the variation initially served to a visitor continues to be served in future interactions with the experience, or whether the system makes a new decision each time. There are three options (see the sketch after this list):
- Not sticky: Every time the user is eligible for an experience, the system selects a variation regardless of previous selections.
- Sticky for the session: A single variation is selected for the user for the duration of their session, even if there are multiple interactions with the experience. For example, if a user returns to a page with a Dynamic Content banner in a single session, the same variation is served.
- Sticky for the user (multi-session): A single variation is chosen for the visitor and served in every session for the entire duration of the campaign.
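One way to think about these options is that each stickiness scope decides which key keeps the decision stable across requests. The following sketch is a hypothetical illustration of that idea, not Dynamic Yield's actual serving logic:

```python
import hashlib
import random

def pick_variation(variations, user_id, session_id, stickiness):
    """Map the stickiness scope to the key that keeps the choice stable.
    Illustrative only; not Dynamic Yield's actual mechanism."""
    if stickiness == "not_sticky":
        return random.choice(variations)        # fresh decision every time
    # "user" keys on the visitor alone; "session" keys on visitor + session
    key = user_id if stickiness == "user" else f"{user_id}:{session_id}"
    digest = hashlib.sha256(key.encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]
```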
The configuration of ‘stickiness’ affects the metrics in the report. For example, if you want to optimize purchases and the experience is set to ‘Sticky for the session’, the primary metric becomes Purchases per Session. If the experience is set to ‘Sticky for the user’, the primary metric becomes Purchases per User.
Click-through rate (CTR) is not affected, however, as it always reports Clicks per Impression.
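Put another way, stickiness only changes the denominator of conversion metrics, while CTR keeps its fixed definition. A quick illustration with hypothetical function names:

```python
def primary_metric(purchases: int, sessions: int, users: int,
                   stickiness: str) -> float:
    """The denominator follows the stickiness scope (illustrative only)."""
    if stickiness == "session":
        return purchases / sessions   # Purchases per Session
    return purchases / users          # Purchases per User

def ctr(clicks: int, impressions: int) -> float:
    # CTR always reports Clicks per Impression, regardless of stickiness
    return clicks / impressions
```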
Best practices
Depending on your traffic allocation method, there are two rules of thumb to follow when selecting the variation stickiness:
- A/B test: When running an A/B/n test (for example, a layout change test), select ‘Sticky for the user’ so the same variation is served to the visitor throughout the entire duration of the test. Using ‘Sticky for the session’ might increase the sample size (because there are probably more sessions than visitors), but it can skew the attribution, especially if the conversion happens after multiple sessions. For example, a visitor might add a product to the cart after being served variation A but make the purchase a few days later after being served variation B. The system attributes the purchase to the last served variation and doesn't factor in previously seen variations that might (or might not) have influenced visitor behavior. For this reason, you can't run A/B tests with ‘Not sticky’.
- Dynamic Allocation: When you allocate your traffic dynamically, select ‘Sticky for the session’. This enables the Dynamic Yield engine to study the performance of each variation throughout the session and serve the best-performing variation in subsequent sessions (to all visitors). If there's no need to keep a consistent user experience, or if your primary metric provides immediate feedback (CTR, for example), you can use the ‘Not sticky’ option. ‘Sticky for the user’ isn't supported for Dynamic Allocation because it defeats the purpose of optimization: there's no reason to keep serving a losing variation just because it was previously served.
Attribution Window
The attribution window is the time period in which conversions and revenue from visitors are attributed to a given variation in the experience reports. For example, if a user clicks a variation on Monday, and they make a purchase on Tuesday, the purchase is attributed to the variation if the window is 1 day or more.
Start Attribution Window
In most cases, we recommend starting the attribution window when the user is served a variation. However, there are times when you might want to fine-tune this trigger:
- Variation is served (default): The attribution window begins as soon as the visitor is served a variation, even if the variation is “below the fold” and the user didn’t actually see it.
- Variation is served and clicked: The attribution window begins when the variation is clicked (not applicable for Custom Action, Visual Edit, and mobile variable sets). When choosing this attribution setting, the experiment results display:
  - All users, sessions, clicks, and impressions, regardless of whether the attribution click happened.
  - Pageviews, events, and revenue metrics that occurred after the attribution click.
- Variation is served and an event is triggered: The attribution window begins when a custom event is triggered. You can specify any event to trigger the beginning of the attribution window. When selecting this attribution setting, the experiment results display:
  - All users, sessions, clicks, and impressions, regardless of whether the attribution event happened.
  - Pageviews, events, and revenue metrics that occurred after the attribution event (including the attribution event itself).
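Conceptually, each option maps to the timestamp that opens the attribution window; if the required click or event never happens, the window never opens. The sketch below is a hypothetical model of that mapping, not Dynamic Yield's internal logic:

```python
from datetime import datetime
from typing import Optional

def attribution_start(served_at: datetime,
                      clicked_at: Optional[datetime],
                      event_at: Optional[datetime],
                      trigger: str) -> Optional[datetime]:
    """Map the Start Attribution Window setting to the timestamp that
    opens the window. None means the window never opens, so no
    conversions are attributed."""
    if trigger == "served":
        return served_at
    if trigger == "served_and_clicked":
        return clicked_at    # stays None if the variation was never clicked
    if trigger == "served_and_event":
        return event_at      # stays None if the attribution event never fired
    raise ValueError(f"unknown trigger: {trigger}")
```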
Best practice
Use the Variation is served and clicked and Variation is served and an event is triggered options when only a small share of the visitors who are served the variation are expected to actually see or engage with it. For example, when the test affects an element low on the page and users aren't likely to scroll down to see it.
End Attribution Window
- Session ends: Conversions and revenue are credited to a variation as long as they occurred after the beginning of the attribution window and before the end of the session.
- 1 Day: Conversions and revenue are credited to a variation as long as they occurred after the beginning of the attribution window and before one day has passed.
- 7 Days: Conversions and revenue are credited to a variation as long as they occurred after the beginning of the attribution window and before a 7-day time period has passed.
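Combining the start trigger with the end setting, a conversion is credited only if its timestamp falls between the moment the window opens and the moment it closes. A hypothetical sketch, reusing the Monday/Tuesday example from the top of this section:

```python
from datetime import datetime, timedelta
from typing import Optional

def is_attributed(conversion_at: datetime,
                  window_start: Optional[datetime],
                  session_end: datetime,
                  policy: str) -> bool:
    """Credit a conversion only if it falls inside the open window."""
    if window_start is None:                 # window never opened
        return False
    if policy == "session_ends":
        window_end = session_end
    elif policy == "1_day":
        window_end = window_start + timedelta(days=1)
    elif policy == "7_days":
        window_end = window_start + timedelta(days=7)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return window_start <= conversion_at <= window_end

# Served Monday noon, purchased Tuesday morning: attributed under a
# 1-day (or longer) window. session_end is unused for the 1-day policy.
served = datetime(2024, 1, 1, 12)   # a Monday
print(is_attributed(datetime(2024, 1, 2, 9), served, served, "1_day"))  # True
```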
Best practice
- Select Session ends when you expect most conversions to happen within the same session. It reduces the noise of conversions that occur on different days, where the variation might not have even been served. Choosing session attribution expedites reaching statistical significance.
- Select 7 Days if you expect a large number of conversions to happen in different sessions. For example, if you're selling refrigerators, your visitors are likely to make a purchase decision after multiple visits, price comparisons, and reviews. Using the session attribution window might not capture those later conversions, and the ‘lost’ data increases the time it takes to reach statistical significance.
- The attribution window can be broken when checkout involves a redirect to another domain (such as PayPal, Shop Pay, or the like), so the best practice is to use something other than Session ends in that scenario.