Dynamic Yield allows you to fine-tune the serving behavior, conversion attribution, and A/B test settings of all variations in each experience. Not all settings are relevant for all campaign types, and the A/B test options do not appear at all unless your experience is using the A/B test allocation method and has more than one variation.
To access the experience settings, click the settings icon when creating or editing an experience.
The following settings are displayed:
A/B Test Options
The following settings are only visible if you set the allocation method to A/B Test and create more than one variation. If you’re running an A/B test and enough data has been collected, a winning variation is declared. The winning variation is chosen after it has been shown to be the best variation with high statistical validity, such that it is safe to expect it to remain the best in the long run. These settings affect how the winning variation is declared.
Winner Significance Level
This setting defines the minimum Probability to Be Best score required for a variation to be declared as the winner. You can set a value between 75% and 99% (the default is 95%). A higher significance level reduces the chance of false declaration at the expense of requiring more data to be collected.
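To illustrate the underlying idea (this is a generic Bayesian sketch, not Dynamic Yield’s actual algorithm), a Probability to Be Best score can be estimated by Monte Carlo sampling from a Beta posterior over each variation’s conversion rate and counting how often each variation draws the highest sample:

```python
import random

def prob_to_be_best(successes, trials, draws=10000, seed=0):
    """Estimate each variation's Probability to Be Best by sampling
    conversion rates from Beta(successes + 1, failures + 1) posteriors.
    Illustrative model only -- not Dynamic Yield's implementation."""
    rng = random.Random(seed)
    wins = [0] * len(successes)
    for _ in range(draws):
        samples = [
            rng.betavariate(s + 1, t - s + 1)
            for s, t in zip(successes, trials)
        ]
        wins[samples.index(max(samples))] += 1
    return [w / draws for w in wins]

# Hypothetical data: variation B converts at 6% vs. A's 5%.
scores = prob_to_be_best([500, 600], [10000, 10000])
winner = max(range(len(scores)), key=scores.__getitem__)
if scores[winner] >= 0.95:  # the default Winner Significance Level
    print(f"Variation {winner} wins with score {scores[winner]:.1%}")
else:
    print("No winner yet -- keep collecting data")
```

With a higher significance level (say 99%), the same data might not be enough to clear the threshold, which is why stricter levels require collecting more data.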
Minimum Test Duration
A winning variation is not declared until this duration has passed since the test started, even if one of the variations has already reached the required statistical validity. This ensures that conclusions about all users are not drawn from weekend visitors or any other time-dependent traffic. We recommend running a test for 2 weeks before declaring a winner.
By default, the person who created the experience is notified by email once a winning variation is declared. However, you can change the recipients by adding or removing email addresses.
Variation Stickiness
Variation stickiness determines whether the variation initially served to the visitor continues to be served in future interactions with the experience, or whether the system makes a new decision about which variation to serve each time. There are three options:
- Not sticky: Every time the user is eligible for an experience, the system selects a variation regardless of previous selections.
- Sticky for the session: A single variation is selected for the user for the duration of their session, even if there are multiple interactions with the experience. For example, if a user returns to a page with a Dynamic Content banner in a single session, the same variation is served.
- Sticky for the user (multi-session): A single variation is chosen for the visitor, and that same variation is served to them in every session for the entire duration of the campaign.
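A common way to implement this kind of stickiness (shown here as a generic sketch, not Dynamic Yield’s implementation) is to hash a stable key: the user ID for user-level stickiness, or the session ID for session-level stickiness. The same key always maps to the same variation, with no server-side state:

```python
import hashlib

def pick_variation(variations, key):
    """Deterministically map a key to one of the variations.
    Hashing the key guarantees the same key always gets the same
    variation. Illustrative sketch, not Dynamic Yield's code."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(variations)
    return variations[index]

variations = ["A", "B", "C"]

# Sticky for the user: key on the user ID -> stable across sessions.
assert pick_variation(variations, "user-42") == pick_variation(variations, "user-42")

# Sticky for the session: key on the session ID -> stable within one session only.
# Not sticky: skip hashing and draw a fresh variation on every eligible request.
```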
The stickiness configuration affects the metrics in the report. For example, if you want to optimize purchases and the experience is set to ‘Sticky for the session’, the primary metric becomes Purchases per Session. If the experience is set to ‘Sticky for the user’, the primary metric becomes Purchases per User.
Click-through rate (CTR) is not affected, however, as it always reports Clicks per Impression.
Depending on your traffic allocation method, there are two rules of thumb to follow when selecting the variation stickiness:
- A/B Test: When running an A/B/n test (for example, a layout change test), it is recommended to select ‘Sticky for the user’ and serve the visitor the same variation throughout the entire duration of the test. Using ‘Sticky for the session’ in such cases may increase the sample size (as there are typically more sessions than visitors), but it may also skew attribution, especially when the conversion happens after multiple sessions. For example, a visitor may add a product to the cart after being served variation A, but make the purchase a few days later after being served variation B. The system attributes the purchase to the last served variation, and does not take into account previously served variations that may (or may not) have influenced the visitor’s behavior. For this reason, the ‘Not sticky’ option is not available for A/B tests.
- Dynamic Allocation: When you choose to allocate your traffic dynamically, it is recommended to select ‘Sticky for the session’. This enables the Dynamic Yield engine to study the performance of each variation throughout the session and serve the best-performing variation in subsequent sessions (for all visitors). If there is no need to keep a consistent user experience, or if your primary metric provides immediate feedback (CTR, for example), you can use the ‘Not sticky’ option. The ‘Sticky for the user’ option is not supported for Dynamic Allocation because it defeats the purpose of optimization: there is no reason to keep serving a losing variation just because it was previously served.
Attribution Window
The attribution window is the time period during which conversions and revenue from a visitor are attributed to a given variation in the experience reports. For example, if a user clicks a variation on Monday and makes a purchase on Tuesday, the purchase is attributed to the variation if the window is 1 day or greater.
Beginning of Attribution Window
In most cases, we recommend starting the attribution window when the user is served a variation. However, there are times when you might want to fine-tune this trigger:
- Variation is served (default): The attribution window begins as soon as the visitor is served a variation, even if the variation is “below the fold” and the user didn’t actually see it.
- Variation is served and clicked: The attribution window begins when the variation is clicked (not applicable for Custom Action, Visual Edit, and Mobile Variable Sets). When choosing this attribution setting, the experiment results display:
- All Users, Sessions, Clicks, and Impressions, regardless of whether the attribution click happened.
- Pageviews, Events, and Revenue metrics that occurred after the click.
- Variation is served and an event is triggered: The attribution window begins when a custom event is triggered. You can specify any event to trigger the beginning of the attribution window. When selecting this attribution setting, the experiment results display:
- All Users, Sessions, Clicks, and Impressions, regardless of whether the attribution event happened.
- Pageviews, Events, and Revenue metrics that occurred after the attribution event (including the attribution event itself).
Use the Variation is served and clicked and Variation is served and an event is triggered options when a small number of visitors are expected to participate in the test. For example, when the test affects an element low on the page and users are not likely to scroll down to see it.
Ending of Attribution Window
- Session ends: Conversions and revenue are credited to a variation as long as they occurred after the beginning of the attribution window and before the end of the session.
- 1 Day: Conversions and revenue are credited to a variation as long as they occurred after the beginning of the attribution window and before one day has passed.
- 7 Days: Conversions and revenue are credited to a variation as long as they occurred after the beginning of the attribution window and before a 7-day time period has passed.
- Select Session ends when you expect most conversions to happen within the same session. It reduces the noise of conversions that occur on different days, where the variation might not have even been served. Choosing session attribution expedites reaching statistical significance.
- Select 7 Days if you expect a large number of conversions to happen in different sessions. For example, if you are selling refrigerators, it is likely that your visitors will make a purchase decision after multiple visits, price comparisons, and reviews. Using the Session attribution window might not cover subsequent conversions. Consequently, the ‘lost’ data increases the time it takes to reach statistical significance.
The attribution window can be broken when checkout involves a redirect to another domain (such as PayPal, Shop Pay, or the like), so the best practice is to use something other than Session ends in this case.
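Putting the start and end settings together, the attribution decision can be sketched as follows. This is a hypothetical helper for illustration only (the function name and signature are not a Dynamic Yield API); `window_start` is when the window opened per the experience settings (variation served, clicked, or a custom event fired):

```python
from datetime import datetime, timedelta

# Possible "end of attribution window" settings, as described above.
WINDOWS = {"1 day": timedelta(days=1), "7 days": timedelta(days=7)}

def is_attributed(window_start, conversion_time, setting, session_end=None):
    """Return True if a conversion falls inside the attribution window.
    Hypothetical helper illustrating the rules, not a Dynamic Yield API."""
    if conversion_time < window_start:
        return False  # conversions before the window opens never count
    if setting == "session ends":
        return session_end is not None and conversion_time <= session_end
    return conversion_time - window_start <= WINDOWS[setting]

served = datetime(2024, 3, 4, 10, 0)    # Monday: variation served
purchase = datetime(2024, 3, 5, 9, 0)   # Tuesday: purchase, 23 hours later

print(is_attributed(served, purchase, "1 day"))   # True: within 1 day
print(is_attributed(served, purchase, "session ends",
                    session_end=datetime(2024, 3, 4, 10, 30)))  # False
```

This mirrors the Monday/Tuesday example above: the purchase is attributed under a 1-day (or 7-day) window, but not under Session ends, because the serving session closed long before the purchase.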