While Dynamic Yield's segmentation capabilities are extensive, you can enrich them even further by onboarding additional user data (such as CRM data). This enables you to target experiences and segment users according to attributes that are unique to your business (for example, a particular VIP status).
You can upload batch data nightly in CSV format, as described in this article, or use our on-demand User Data API. The API is the best choice when you need real-time data activation and the number of users to update is small.
Batch upload
How it works:
- A user data file is uploaded on a daily basis, and includes a list of users (with an identifier) and their attributes (such as age group).
- After the user data file is uploaded, you can see new audience conditions based on the attributes included in the file.
- When a user visits your site and identifies themselves (even once, and even before the user data file is onboarded), Dynamic Yield connects the user identity in the file to the device they are browsing from. If they match audience conditions based on these attributes, they are included in the audience.
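For illustration only, a user data file might look like the following (the column names here are hypothetical, and the identifier column is covered in Step 2):

```csv
user_id,age_group,vip_status
<sha256 of lowercase email>,25-34,gold
<sha256 of lowercase email>,18-24,silver
```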
Onboarding process
The data onboarding process requires data preparation, as follows:
Step 1: Prepare a schema, or modify an existing one
Use the User Data Schema Creator to create or modify your JSON code.
Step 2: Determine a common identifier
Each record of user data should include a unique identifier, used to match a first-party data record to an online user. The recommended unique identifier is the SHA256 hash of the lowercase email address. For privacy reasons, do not use plain text PII (such as a name, phone number, or email address) as the identifier.
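For example, a minimal sketch of producing this identifier in Python (the function name is illustrative):

```python
import hashlib

def email_to_identifier(email: str) -> str:
    # SHA256 hash of the lowercase email address
    normalized = email.lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(email_to_identifier("Jane.Doe@example.com"))
# Prints a 64-character hex digest that contains no plain text PII
```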
Make sure that the Identify, Login, Signup, and Newsletter Subscription events use the same identifier.
Optional: Any additional identification instance by the visitor (for example, arriving from a newsletter) can be reported to Dynamic Yield using any of the omni-map client-side APIs (Signup, Login, or Identify), or the Login() server-side API.
Step 3: Upload your data to your Dynamic Yield site
Note: If your file exceeds 40 attributes or 100MB, you must split it into multiple files and follow the large file handling instructions in this article.
- Navigate to Assets › Data Feeds › New › User Data Feed.
- Name your feed based on the type of data you're uploading, for example, "User Loyalty Points & Tiers".
- Select Upload a CSV file as your Feed Source.
- Click Request credentials to get credentials for an AWS S3 bucket. Copy and store the credentials for future reference. Optionally, click Email Credentials to send them to yourself by email.
Note that you need to save both the path and the key information.
- This step is done on your end, by your DevOps or technical team: Upload your user data CSV file to your designated S3 bucket. For example, the path might look like s3://com.dy-external.crm/Bucket_Id/{Folder name}, or s3://eu-central-1.dy-external.crm/Bucket_Id/{Folder name} for EU sites.
AWS regions where the buckets are created:
- US East (N. Virginia - us-east-1)
- Europe (Frankfurt - eu-central-1)
- This step is done on your end, by your DevOps or technical team: Set up an upload process that syncs the data in your dedicated folder to guarantee that your data is up to date. Use the following guidelines:
- File type: CSV.
- File format: UTF-8 encoded. If using non-Latin characters (Russian, Japanese, and so on), the encoding of the CSV file should be UTF-8 with BOM.
- Filename: Must end with "CRM_data.csv"
- Upload frequency: Daily.
- Each daily upload should be added to a folder named in the format upload_YYYY-MM-DD_HH-mm (for example, upload_2021-02-28_00-03). See the naming sketch after the Python example below.
- The daily upload can include only:
- One file (if you need more than one due to file size, see large file handling).
- A full snapshot of your user data.

Python example (a cleaned-up sketch using the legacy boto library; replace the credentials and bucket name with the values you received in the Request Credentials step):

```python
import boto

ACCESS_KEY = "replace this with your access key"
SECRET_KEY = "replace with your secret key"

if __name__ == '__main__':
    # Connect to S3 using the credentials provided by Dynamic Yield
    s3_connection = boto.connect_s3(
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        is_secure=False,
    )
    bucket = s3_connection.get_bucket('com.dynamicyield.crm', validate=False)

    # Write a test object and read it back to verify access
    key = bucket.get_key('/test', validate=False)
    key.set_contents_from_string('test content')
    print(key.get_contents_as_string())

    # List the contents of the bucket
    for item in bucket.list(prefix='/'):
        print(item)
```
- When your file has been uploaded, go back to the Edit Feed screen, and paste the schema created in Step 1 into the schema editor.
- Below the schema editor, indicate your field delimiter, unique identifier, and identifier type. If the identifier type is Other, specify the type in the Custom Identifier field.
- Click Validate and preview your data. If everything is as expected, click Save and Activate.
When these steps are complete, the data is processed by the Dynamic Yield batch processes. Upon completion, the relevant conditions appear in your account. Data processing time can vary, typically taking up to 24 hours.
It's important to update the data frequently, to ensure that targeting is up to date. When your data is stale, users whose CRM data has changed might not be targeted for relevant experiences, and users who were recently added to the CRM do not match any conditions.
If the feed is not updated for more than 10 days, an alert appears on your dashboard.
If the feed is not updated for more than a year, all data is deleted, in compliance with GDPR.
Large file handling
The maximum supported file size is 100 MB. Larger files must be split into multiple files of about 40 MB each. The structure of the files must be identical, and they must be uploaded to the same daily folder. Each file must contain a different set of users (no user should appear in more than one file).
When working with one file, make sure its name ends with "CRM_data.csv". When working with multiple files, enumerate the filenames as follows, and make sure the last filename ends with CRM_data.csv, as this filename triggers the ingestion:
- CRM_data_part_1.csv
- CRM_data_part_2.csv
- CRM_data_part_3.csv
- ...
- CRM_data.csv (last file, which triggers the ingestion)
Include the headers in each one of the files.
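A minimal sketch of a splitter that follows these rules (the part size and file paths are assumptions; adjust them to your pipeline):

```python
MAX_PART_BYTES = 40 * 1024 * 1024  # ~40 MB per part, per the guidance above

def split_snapshot(source_path: str) -> None:
    # Read the full snapshot and cut it into parts, repeating the
    # header row at the top of every part.
    with open(source_path, encoding="utf-8") as src:
        header = src.readline()
        parts, current, size = [], [header], len(header.encode("utf-8"))
        for line in src:
            line_size = len(line.encode("utf-8"))
            if size + line_size > MAX_PART_BYTES:
                parts.append(current)
                current, size = [header], len(header.encode("utf-8"))
            current.append(line)
            size += line_size
        parts.append(current)

    for i, part in enumerate(parts):
        # The last file must be named CRM_data.csv, because that name
        # triggers the ingestion; upload it after all the other parts.
        last = i == len(parts) - 1
        name = "CRM_data.csv" if last else f"CRM_data_part_{i + 1}.csv"
        with open(name, "w", encoding="utf-8", newline="") as out:
            out.writelines(part)

split_snapshot("full_snapshot.csv")  # hypothetical input file
```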
For more information, speak to your Customer Success Manager.
Creating audiences based on user data
When your data is onboarded, you can immediately use the Audience Explorer to explore these valuable segments and analyze their performance compared to your entire online user base, complemented by the behavioral data captured by Dynamic Yield. The new condition appears under User Properties, using the user data feed name. It can be coupled with any other Dynamic Yield condition available in your account to create and save micro-targetable segments (audiences), which can be incorporated into your Dynamic Yield Site Personalization experiences and Recommendations.