The Experience Report displays the results of your experiences. Results are broken down by variation, and for Dynamic Allocation tests you can also see results for the entire experience. You can view the impact the experience had on primary and secondary metrics, and segment the results by audience.
The structure of the report changes depending on whether the experience is an A/B test, serves a single variation without a control group, or uses Dynamic Allocation.
Timeframe
A/B test results display the timeframe of the Current Version by default. This is the data collected from the moment the test version started until last midnight, updated daily at 9 am in the timezone of the site. This is the most accurate way to look at your A/B test results. If there are any previous versions of the test, you can select them from the dropdown to change the view. Alternatively, you can view results for custom time frames, with convenient presets such as Last 7 days or Yesterday. For user-based tests, custom time frames estimate the number of Unique Users during that period, so we recommend looking at the entire test version before acting on the test results.

Total/Optimization Performance
This section appears only if the experience uses Dynamic Allocation. It shows the overall performance of the experience, regardless of which variation was served. For example, if you have a hero banner serving multiple variations and optimizing for CTR, this view shows you the overall performance of the banner element, aggregating impressions and clicks across all the variations that Dynamic Yield chose to populate the banner.
If you use a control group, this section also compares sessions served the control group with sessions served by Dynamic Allocation, measuring the impact of the automatic allocation managed by Dynamic Yield against the static control group.
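As a rough illustration, the overall performance is simply the aggregate of everything the optimization served, compared against the control group if one exists. The sketch below uses invented figures:

```typescript
// Illustrative sketch: overall CTR of a Dynamic Allocation experience vs. a static control group.
// All numbers and the VariationStats shape are made up for demonstration.
interface VariationStats { name: string; impressions: number; clicks: number; }

const servedByDynamicAllocation: VariationStats[] = [
  { name: "Hero A", impressions: 12000, clicks: 540 },
  { name: "Hero B", impressions: 8000, clicks: 420 },
  { name: "Hero C", impressions: 5000, clicks: 190 },
];

// Aggregate impressions and clicks across all variations chosen by Dynamic Allocation.
const totalImpressions = servedByDynamicAllocation.reduce((sum, v) => sum + v.impressions, 0);
const totalClicks = servedByDynamicAllocation.reduce((sum, v) => sum + v.clicks, 0);
const optimizationCtr = totalClicks / totalImpressions;

// Compare against the sessions that were served the static control group.
const control = { impressions: 2500, clicks: 95 };
const controlCtr = control.clicks / control.impressions;

console.log(`Dynamic Allocation CTR: ${(optimizationCtr * 100).toFixed(2)}%`);
console.log(`Control group CTR:      ${(controlCtr * 100).toFixed(2)}%`);
console.log(`Uplift vs. control:     ${((optimizationCtr / controlCtr - 1) * 100).toFixed(1)}%`);
```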

Over-Time Results
The graph shows over-time results for your primary metric and a set of fixed metrics (purchases, impressions, users, AOV). You can view metrics in one of two modes:
- Daily results: the performance on each individual day.
- Cumulative results (only in A/B tests): each point in the graph shows you the data from the beginning of the test version until that day.
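As a minimal sketch of the difference between the two modes, cumulative values are simply running totals of the daily values (the daily figures below are invented):

```typescript
// Daily vs. cumulative results for a single metric (illustrative numbers only).
const dailyPurchases = [12, 18, 9, 22, 15];

// Cumulative mode: each point is the sum from the start of the test version up to that day.
const cumulativePurchases = dailyPurchases.reduce<number[]>((acc, value) => {
  const previousTotal = acc.length > 0 ? acc[acc.length - 1] : 0;
  acc.push(previousTotal + value);
  return acc;
}, []);

console.log(cumulativePurchases); // [12, 30, 39, 61, 76]
```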

Variation Performance
The Variation Performance table displays the performance of each variation. The metrics in this table are:
- Users (only if variation stickiness is Sticky for the User): Unique users that were served the variation. Learn more about users.
- Sessions (only if variation stickiness is Sticky for the Session): Number of sessions in which the variation was served to a user. If a user saw the variation multiple times within the same session, the session is counted only once. Learn more about sessions.
- Impressions: Number of times the variation was served.
- Number of times the event was triggered (if the metric is not revenue-based): Number of times users performed the required action (e.g., subscriptions, add to cart, purchases).
- Revenue of the conversion (if the metric is revenue-based): Total revenue associated with all the conversions generated by the users who were served the variation.
- Normalized results: The conversions or revenue attributed to each variation, divided by the number of users, sessions, or impressions (depending on the stickiness and primary metric). This metric allows you to compare the performance of different variations fairly, normalizing results by the actual exposure each one received (i.e. not giving an advantage to variations that were served more).
- Uplift: The ratio between the normalized results of each variation and the baseline, minus 1. This metric appears only if there is a baseline variation: either a control group or, if there is no control group, a variation chosen as the baseline. You can choose which variation to use as the baseline above the Variation Performance table.
- Probability to Be Best (only if there are multiple variations): The chance that each variation will outperform all other variations in the long run. The calculation takes into account both the difference in performance between variations and the statistical confidence we have in the results. This is the most actionable metric in your A/B test results, as it defines when results are conclusive and you can apply the winner to all traffic. The industry standard is to treat 95% Probability to Be Best as conclusive.
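The report does not spell out the exact statistical model behind these metrics. The sketch below computes normalized results and uplift directly from their definitions, and approximates Probability to Be Best for a conversion-rate metric with a common Bayesian approach (Monte Carlo sampling from Beta posteriors). The counts are invented and the model is an assumption for illustration, not necessarily the calculation used in the report:

```typescript
// Illustrative sketch: normalized results, uplift, and a Monte Carlo approximation of
// "Probability to Be Best" for a conversion-rate metric. The counts and the Beta-posterior
// model are assumptions for demonstration, not necessarily the report's exact calculation.

interface VariationCounts { name: string; users: number; conversions: number; }

const baseline: VariationCounts = { name: "Control", users: 10000, conversions: 300 };
const variations: VariationCounts[] = [
  baseline,
  { name: "Variation A", users: 10000, conversions: 345 },
  { name: "Variation B", users: 10000, conversions: 310 },
];

// Normalized result: conversions divided by exposure (users here, per stickiness).
const normalized = (v: VariationCounts): number => v.conversions / v.users;

// Uplift: the variation's normalized result vs. the baseline, minus 1.
const uplift = (v: VariationCounts): number => normalized(v) / normalized(baseline) - 1;

// Probability to Be Best: sample a plausible conversion rate for every variation from a
// Beta(conversions + 1, non-conversions + 1) posterior and count how often each one wins.
function sampleStandardNormal(): number {
  // Box-Muller transform.
  const u1 = Math.random() || Number.MIN_VALUE;
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function sampleGamma(shape: number): number {
  // Marsaglia-Tsang method (valid for shape >= 1, which the +1 priors guarantee here).
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x: number;
    let v: number;
    do {
      x = sampleStandardNormal();
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

function sampleBeta(alpha: number, beta: number): number {
  const x = sampleGamma(alpha);
  const y = sampleGamma(beta);
  return x / (x + y);
}

function probabilityToBeBest(counts: VariationCounts[], draws = 20000): number[] {
  const wins = counts.map(() => 0);
  for (let i = 0; i < draws; i++) {
    const rates = counts.map((v) => sampleBeta(v.conversions + 1, v.users - v.conversions + 1));
    wins[rates.indexOf(Math.max(...rates))]++;
  }
  return wins.map((w) => w / draws);
}

const p2bb = probabilityToBeBest(variations);
variations.forEach((v, i) => {
  console.log(
    `${v.name}: normalized=${normalized(v).toFixed(4)}, ` +
      `uplift=${(uplift(v) * 100).toFixed(1)}%, P(best)=${(p2bb[i] * 100).toFixed(1)}%`
  );
});
```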

Secondary Metrics
We recommend looking at the secondary metrics as well before acting on an experience report. Even if the primary metric uplift is positive, it is good practice to check how other metrics that matter to you were affected. For example, a test can have a positive impact on "Add to Cart", but a negative one on revenue, AOV, or subscriptions. In this case, you might want to stop the test without applying the variation.
You can view the following metrics as secondary metrics:
- CTR (and clicks) in campaign types that inject HTML
- Pageviews
- Purchases
- Revenue
- AOV (average order value)
- Any of your custom events
In A/B test reports, you can view up to 10 secondary metrics in a single view. In non-A/B test reports, you can add secondary metrics as table columns, one at a time.
Audience Breakdown
In the Variation Performance table, you can break down results by audience to see how variations perform across different user segments. For example, a variation may perform best for all visitors, but worse for a specific audience (e.g. Mobile Users). Segmentation can help you determine whether a variation that did not win for the overall population should still be served to a particular audience.

Audience Breakdown Calculation
Whether a user is counted as part of an audience for each variation depends on the allocation method:
For experiences using the A/B Test allocation: User data appears in the report only if the user belonged to the selected audience at the moment of the user's first impression of the variation in the test version. If the user entered the audience after viewing the variation for the first time, data related to this user does not appear in the breakdown.
For experiences using Dynamic Allocation: User data appears in the report if the user belonged to the audience at the moment of the relevant interaction (e.g., impression, click, purchase).
Note: Audience breakdown is only available for:
- The current version or previous version timeframes of A/B tests with multiple variations
- The last 30 days (or shorter) timeframes for Dynamic Allocation and A/B tests with one variation
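A minimal sketch of the two rules above, using hypothetical data shapes (the AudienceMembership and Interaction types are invented for illustration, not actual report entities):

```typescript
// Illustrative sketch of when a user's data is attributed to an audience in the breakdown.
interface AudienceMembership { audienceId: string; from: Date; to?: Date; }
interface Interaction { type: "impression" | "click" | "purchase"; at: Date; }

function inAudienceAt(history: AudienceMembership[], audienceId: string, at: Date): boolean {
  return history.some(
    (m) => m.audienceId === audienceId && m.from <= at && (m.to === undefined || at <= m.to)
  );
}

// A/B Test allocation: membership is evaluated once, at the user's first impression
// of the variation within the test version.
function countsForAbTestBreakdown(
  history: AudienceMembership[], audienceId: string, firstImpressionAt: Date
): boolean {
  return inAudienceAt(history, audienceId, firstImpressionAt);
}

// Dynamic Allocation: membership is evaluated at the moment of each relevant interaction.
function countsForDynamicAllocationBreakdown(
  history: AudienceMembership[], audienceId: string, interaction: Interaction
): boolean {
  return inAudienceAt(history, audienceId, interaction.at);
}
```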
Outlier Handling
If Dynamic Yield detects that the value of a revenue-based event is improbably higher or lower than usual, the event is labeled as an outlier and is excluded from A/B test results. This prevents individual anomalous events from skewing results and incorrectly predicting average future user behavior. You can choose to show or hide outliers in your reports. For more details, see Outlier Handling.
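The exact detection rule is not described here. As an illustration of the general idea only, a simple interquartile-range filter over revenue values might look like the sketch below (the threshold is an arbitrary assumption, not Dynamic Yield's actual rule):

```typescript
// Illustrative only: flag revenue events whose value falls far outside the typical range,
// using a simple interquartile-range (IQR) rule. Dynamic Yield's actual detection logic
// is not documented here and may differ.

function quantile(sorted: number[], q: number): number {
  const pos = (sorted.length - 1) * q;
  const base = Math.floor(pos);
  const rest = pos - base;
  return sorted[base] + rest * ((sorted[base + 1] ?? sorted[base]) - sorted[base]);
}

function flagRevenueOutliers(values: number[], iqrMultiplier = 3): boolean[] {
  const sorted = [...values].sort((a, b) => a - b);
  const q1 = quantile(sorted, 0.25);
  const q3 = quantile(sorted, 0.75);
  const iqr = q3 - q1;
  const lower = q1 - iqrMultiplier * iqr;
  const upper = q3 + iqrMultiplier * iqr;
  return values.map((v) => v < lower || v > upper);
}

// Example: one $5,000 order among typical $40-$90 orders is flagged and would be
// excluded from the A/B test results.
const revenues = [42, 55, 61, 48, 90, 73, 5000, 66];
console.log(flagRevenueOutliers(revenues)); // true only for the 5000 entry
```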
Exporting a Report to CSV
You can export report data to CSV to share externally or manipulate results in another platform. Click the export button at the top right of the report, and select one of the following options:
- Full report: A CSV version of the report, including all secondary metrics.
- Revenue event log (only in A/B test reports): A log of all revenue events, indicating which events are outliers. This option is only available when viewing a test version (current or previous) of an A/B test report.
Note: A report is exported only if the test version includes at least 100 users.
Frequently Asked Questions
Which pageviews are counted in the Pageviews metric?
The Pageviews metric counts pageviews that are attributable to a given variation. For each user, pageviews are counted during the attribution window chosen when the experience was configured.
Why am I seeing discrepancies between reports using “Current Version” and Custom Time over the same time period?
The main reason is that when using custom time frames, the numbers of unique users and sessions are estimated due to performance considerations. Estimation is done with an algorithm called HyperLogLog, which can have a margin of error of around 2%. When running an A/B test, where accuracy is critical, the number of unique users and sessions is calculated exactly, to ensure that test results are based on the most accurate data.
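For context, a toy HyperLogLog sketch looks roughly like the following. It is meant only to show where the small relative error (about 1.04/√m for m registers) comes from, not to reproduce the production implementation:

```typescript
// Toy HyperLogLog sketch (illustrative only; not Dynamic Yield's implementation).
// With m = 2^b registers, the standard relative error is roughly 1.04 / sqrt(m),
// which is why estimated unique counts carry a small margin of error.

function fnv1a32(input: string): number {
  // Simple 32-bit FNV-1a hash; good enough for a demonstration.
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0;
}

class HyperLogLog {
  private readonly b: number;
  private readonly m: number;
  private readonly registers: Uint8Array;

  constructor(b = 11) { // m = 2048 registers -> roughly 2.3% standard error
    this.b = b;
    this.m = 1 << b;
    this.registers = new Uint8Array(this.m);
  }

  add(item: string): void {
    const hash = fnv1a32(item);
    const index = hash >>> (32 - this.b);                // first b bits choose a register
    const rest = (hash << this.b) >>> 0;                 // remaining bits
    const rank = rest === 0 ? 32 - this.b + 1 : Math.clz32(rest) + 1; // leftmost 1-bit position
    if (rank > this.registers[index]) this.registers[index] = rank;
  }

  estimate(): number {
    const alpha = 0.7213 / (1 + 1.079 / this.m);
    let sum = 0;
    let zeros = 0;
    for (const r of this.registers) {
      sum += Math.pow(2, -r);
      if (r === 0) zeros++;
    }
    const raw = (alpha * this.m * this.m) / sum;
    // Small-range correction (linear counting) while many registers are still empty.
    return raw <= 2.5 * this.m && zeros > 0 ? this.m * Math.log(this.m / zeros) : raw;
  }
}

// Rough demonstration: estimate the number of unique user IDs in a stream.
const hll = new HyperLogLog();
for (let i = 0; i < 100000; i++) hll.add(`user-${i}`);
console.log(Math.round(hll.estimate())); // close to 100,000, typically within a few percent
```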
How is Uplift calculated?
The uplift compares the performance of each variation to the control group. It is only relevant if a control group or a baseline variation is defined. The calculation is as follows: (variation performance / control group or baseline performance) -1. For example, if the AOV for a variation is $45, but only $30 for the control group, the calculation is as follows: (45/30)-1=0.5, or an uplift of 50%.
Why don't I see information about clicks or CTR?
The CTR metric only appears in campaign types that render HTML (e.g. Recommendations, Dynamic Content, Overlay). This is because CTR measures clicks on the HTML element that is rendered, so campaign types that do not render any HTML (e.g. Custom Code, Visual Edit) cannot track clicks. If you want to measure clicks for one of these campaign types, you can fire a custom event when the element you want to track is clicked, as in the example below.
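For example, assuming the standard DY.API client-side event call is available on the page (verify the exact call against your own implementation; the selector and event name below are placeholders), a click listener can report a custom event that you can then use as a metric:

```typescript
// Illustrative sketch: report a click as a custom event so it can be used as a report metric.
// The ".promo-banner" selector and the event name are placeholders; confirm the event call
// against your own Dynamic Yield implementation before relying on it.

declare const DY: { API: (type: string, payload: { name: string }) => void };

const trackedElement = document.querySelector(".promo-banner"); // element served by the campaign
trackedElement?.addEventListener("click", () => {
  DY.API("event", { name: "Promo Banner Click" }); // custom event name of your choice
});
```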
Why am I missing users when comparing Audience Breakdown vs. the overall population?
In A/B test reports, when you break down results by audience, you might notice that some users were not segmented into any audience, usually around 2%-5% of the overall population. This happens because users must be part of the audience at the time of their first impression of the variation. What can cause this:
- If the variation redirects the user to another page immediately after it is served. This can cause a large amount of missing data in the audience breakdown, up to 100% of the data. Learn how to avoid it in the guide to Split/Redirect Tests.
- If users left the site or the page immediately after receiving the variation.
- In device type audiences: if the device is unknown (not a smartphone, tablet, or desktop), the user's data will not appear in any of the audiences.
Note: "All Users" data includes the data of all users, even if their data does not appear in the audience breakdown.
Why do certain types of experiences have a "Current Version" time frame and others do not?
Experiences that are running an A/B test (i.e. A/B test allocation with at least 2 variations, or a variation and a control group) have a current version view, which is the most accurate reflection of the test results. Other experiences do not have a version view and default to a time frame of "Past 7 Days". In all experiences, you can use the date picker to extend the timeframe or pick any custom dates in the past.