While Dynamic Yield's segmentation capabilities are extensive, you can enrich them even further by onboarding additional user data (such as CRM data). This enables you to target experiences and segment users according to attributes that are unique to your business (for example, a particular VIP status).
You can upload batch data nightly in CSV format, as described in this article, or you can use our on-demand User Data API. The API is the best choice when you need real-time data activation and the number of users to be updated is not large.
Batch upload
How it works:
- A user data file that includes a list of users (with an identifier) and their attributes (such as age group) is uploaded daily.
- After the user data file is uploaded, you can see new audience conditions based on the attributes included in the file.
- When a user visits your site and identifies (even once, and even before the user data file is onboarded), Dynamic Yield connects the user identity in the file to the device they are browsing from. If they match audience conditions based on these attributes, they are included in the audience.
Onboarding process
The data onboarding process is done by your Technical Account Manager, but it requires data preparation on your side. This is the process:
Step 1: Prepare a schema, or modify an existing one
Use the User Data Schema Creator to create or modify your JSON schema, and then send it to your account manager.
Step 2: Determine a common identifier
Each record of user data must include a unique identifier, which is used to match a first-party data record to an online user. The recommended identifier is a SHA256 hash of the lowercase email address. For privacy reasons, do not use plain-text PII (such as a name, phone number, or email address) as the identifier.
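For example, a minimal sketch of computing such an identifier in Python (the function name and sample address are illustrative):

import hashlib

def crm_identifier(email):
    """Return the SHA256 hash of the trimmed, lowercase email address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

print(crm_identifier("Jane.Doe@example.com"))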
Make sure that the Identify, Login, Signup, and Newsletter Subscription events use the same identifier.
Optional: Any additional identification instance by the visitor (for example, arriving from a newsletter) can be reported to Dynamic Yield by any of the client-side APIs (Signup, Login, or Identify) or the server-side Login() API.
Step 3: Provide the schema to Support, and initiate a daily upload process
Note: If your file exceeds 40 attributes or 100 MB, you must split it into multiple files, as described in Large file handling below.
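Before uploading, you can verify whether your snapshot is within these limits. A minimal sketch, assuming a UTF-8 (optionally BOM-prefixed) CSV; the helper name is hypothetical:

import csv
import os

MAX_BYTES = 100 * 1024 * 1024  # 100 MB size limit
MAX_ATTRIBUTES = 40            # attribute (column) limit

def needs_split(path="CRM_data.csv"):
    """Return True if the snapshot exceeds the size or attribute limits."""
    # The utf-8-sig codec transparently reads UTF-8 with or without a BOM.
    with open(path, newline="", encoding="utf-8-sig") as f:
        attributes = len(next(csv.reader(f)))  # columns in the header row
    return os.path.getsize(path) > MAX_BYTES or attributes > MAX_ATTRIBUTES

print(needs_split())

To initiate the upload process: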
- Get credentials for an AWS S3 bucket from your Technical Account Manager to gain access to a dedicated folder to which you can upload your first-party user data.
For example, the path might look like: s3://com.dy-external.crm/Bucket_Id/{Folder name} or, for EU sections, s3://eu-central-1.dy-external.crm/Bucket_Id/{Folder name}.
AWS regions where the buckets are created:
- US East (N. Virginia, us-east-1)
- Europe (Frankfurt, eu-central-1)
- Set up an upload process that syncs the data in your dedicated folder to guarantee that your data is up to date. Use the following guidelines:
- File type: CSV.
- File format: UTF-8 encoded. If using non-Latin characters (Russian, Japanese, and so on), the CSV file should be encoded as UTF-8 with BOM.
- Filename: CRM_data.csv
- Upload frequency: Daily.
- Each daily upload should be added to a folder named in the format upload_YYYY-MM-DD_HH-mm (for example, upload_2021-02-28_00-03).
- The daily data dump can include only:
  - One file.
  - A full snapshot of your first-party data.
Python example:
id = "replace this with your access key" secret = "replace with your secret key" if __name__ == '__main__': import boto s3_connection = boto.connect_s3(aws_access_key_id=id, aws_secret_access_key=secret, is_secure=False) bucket = s3_connection.get_bucket('com.dynamicyield.crm', validate=False) key = bucket.get_key('/test', validate=False) key.set_contents_from_string('test content') print key.get_contents_as_string() rs = bucket.list(prefix='/') for i in rs: print i
When these steps are complete, the data is processed by the Dynamic Yield batch processes, and upon completion, the relevant conditions appear in your account. Processing time can vary, so follow up with your Technical Account Manager to find out when to expect your CRM data conditions to be available.
It is important to update the data frequently, so that targeting stays up to date. When your data is not the latest, users with changed CRM data might not be targeted for relevant experiences, and users who were recently added to the CRM do not match any condition.
If the feed is not updated for more than 10 days, an alert appears on your dashboard.
If the feed is not updated for a year, all data is deleted, in compliance with GDPR.
Large file handling
The maximum supported file size is 100 MB. Larger files must be split into multiple files of about 40 MB each. The structure of the files must be identical, they must be uploaded to the same daily folder, and each file must contain information about different users (no user should appear in more than one file).
When working with one file, name it CRM_data.csv. When working with multiple files, enumerate the file names as follows, and make sure the last file is named CRM_data.csv, as this file name triggers the ingestion:
- CRM_data_part_1.csv
- CRM_data_part_2.csv
- CRM_data_part_3.csv
- ...
- CRM_data.csv (the last file, which triggers the ingestion)
Include the header row in each of the files.
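As an illustration, here is a minimal Python sketch of one way to split a full snapshot according to these rules. It assumes one row per user, and the source file name is hypothetical; this is not an official Dynamic Yield tool:

import os

PART_BYTES = 40 * 1024 * 1024  # target size of each part (~40 MB)

def split_snapshot(src='CRM_data_full.csv'):
    """Split a full CSV snapshot into parts, repeating the header in each."""
    part_names = []
    with open(src, 'rb') as source:
        header = source.readline()
        part, size = None, 0
        for line in source:  # one row per user, so no user spans two parts
            if part is None or size + len(line) > PART_BYTES:
                if part is not None:
                    part.close()
                name = f'CRM_data_part_{len(part_names) + 1}.csv'
                part_names.append(name)
                part = open(name, 'wb')
                part.write(header)  # every part repeats the header row
                size = len(header)
            part.write(line)
            size += len(line)
        if part is not None:
            part.close()
    if part_names:
        # Rename the last part to CRM_data.csv, which triggers the ingestion.
        os.rename(part_names[-1], 'CRM_data.csv')
        part_names[-1] = 'CRM_data.csv'
    return part_names

print(split_snapshot())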
For more information, speak to your Customer Success Manager.
Creating audiences based on first-party data
Once your data has been onboarded, you can immediately use the Audience Explorer to explore these valuable segments and analyze their performance in comparison to your entire online user base, complemented by users' behavioral data captured by Dynamic Yield. These conditions can be coupled with any other Dynamic Yield condition available in your account, and used to create and save micro-targetable segments (audiences) that can be incorporated into your Dynamic Yield Site Personalization experiences and Recommendations.