Imports
Antavo’s Imports module lets you configure various entities and mass-enroll customers in the loyalty program by importing settings and data fields from CSV or JSON files.
Find the configuration page of the Imports functionality by navigating to the Modules menu and searching for the Import module. The page will open to the list of files that have already been uploaded.
The columns represent the following information:
Created at | Date when the import file was uploaded
Created by | Management UI user who uploaded the import file
Model | The type of data set imported (challenge, coupon pool, coupon, customer field, customer mapping rule, customer, event, historical transaction, product, reward, store, user group)
Status | Current status of the import process
Filename | Name of the imported file
Filesize | The size of the import file in bytes
Creating a new import
Imports can be used to create new entity items or modify existing ones in Antavo. If an entity in the input file already exists in the loyalty program, the import will update that entity upon a successful match. The only exception is the Historical transaction import, where existing transactions and their related events cannot be re-imported or modified, nor can new events be added to existing transactions.
To begin creating a new import, click the Create new import button on the left sidebar.
Data model
First, select the data model to import and click Next. The available data models are:
Challenges: Create new or update existing challenges that are set up in the Challenges module. Challenge imports are executed without the challenge image to make imports more efficient. Please note that images still need to be added when editing a challenge afterward to ensure it is displayed properly on the membership site.
Coupon pools: Create or update coupon pools in the Coupons module. If the pool uses a set of uploaded coupons as the source of the coupons, make sure to upload them through the Coupon import feature.
Coupons: Import coupons to be used as uploaded coupons of rewards, friend referral coupons, or coupon pools. Alternatively, coupon imports to coupon pools can also be initiated through the Coupons module.
Customer fields: Import new customer data attributes.
Customer mapping rules: Create new mapping rules to update customers.
Customers: Import new loyalty members or mass-update customer data, including labels appended by third-party integrations. Please note that calculated customer data cannot be imported. If you need to insert such values, contact the Antavo Service Desk.
Events: When creating an event import file, ensure that you always include the customer and action columns, which should contain the ID of the customer and the event in the data rows of the import files respectively.
Historical transactions: Import complete historical transaction records, including related events such as checkout details, item purchases, and partial or full refunds. Please note that the imported transactions cannot be modified or updated later.
The import procedure for historical transactions is different from that of other data models. Please see the details under the Importing historical transactions section of this article.
Products: Import new products or update existing products added in the Product catalog module.
Rewards: Create new rewards or update existing rewards configured in the Reward module. Please note that reward imports are executed without the reward image to make imports more efficient. Ensure that you upload the reward image(s) through the reward editor interface so they are displayed properly on the membership site.
Stores: Import new stores or update the existing stores configured in the Stores module.
User groups: Import new groups to the User groups module.
Data source
Downloading a template file
To import data, you will need to create an import file. Once a data model is selected, the system provides an interface where template files can be downloaded in both JSON and CSV formats. These templates reflect the file structure supported by the system.
For the Historical transactions data model, it’s important to note that only JSON files are supported for import. Therefore, the corresponding template for this data model is available exclusively in JSON format.
For detailed instructions on how to construct the import file, please refer to the Creating an import file section of this document.
Selecting the data source
The data source options and the import procedure for historical transactions differ from those of other data models. While other data models support both CSV and JSON formats, the Historical transactions data model supports JSON files exclusively; CSV files cannot be used for this purpose. To learn about the whole import procedure of historical transactions, see the Importing historical transactions section of this document.
In the Source dropdown field, you’ll need to select the source from which Antavo can access the import file. Here are the available options:
Upload: You can upload the CSV or JSON file directly by selecting the file using the Choose file button in the Filename fields. You can download a template CSV file or JSON file containing all the necessary fields by clicking the name of the corresponding file extension.
Please note that the import of child events, like checkout_item, is only supported in JSON format.
URL: If the import file is accessible via a URL, you can insert the URL into the Url field.
SFTP or FTPS file transfer protocols: If you're using SFTP or FTPS, you'll need to define the connection details including Host, Port, User, Pass, and Path. Additionally, you can select a Cleanup method to rename or delete the import file after it has been processed.
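For example, a typical SFTP connection setup might look like the following sketch (all values are illustrative placeholders, not real connection details):
Host: sftp.example.com
Port: 22
User: antavo_import
Pass: ********
Path: /imports/customers.csv
Cleanup method: Rename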
Fields
If you open the Fields collapsible section, you’ll find the attributes that need to be filled based on the selected data model. The list includes the following details for each field: Key (ID), Name, Data Type (e.g., text, date, select, numeric), Required status, Computed status, Possible values (if applicable), and Description (if provided).
Both built-in and custom attributes are importable and will be added to the list of fields.
Use the Only required fields checkbox to display only the essential attributes of the entity that need to be imported.
If you translate the values of text attributes to different languages on applicable module interfaces, then translations should be imported in separate attributes. The name of translation attributes should be constructed from the ID of the parent attribute and the ISO 639-1 code of the specific language. For example, English, Spanish and French translations of the Name attribute should be added to the import file in attributes like name.en, name.es and name.fr.
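For example, a reward import file that carries the default name along with its Spanish and French translations could start like this (a header row followed by one sample data row; the values are placeholders):
name,name.es,name.fr
Free shipping,Envío gratis,Livraison gratuite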
When importing coupons, ensure that you add the parent column to the import file and populate the rows with the ID of the coupon pool in which the coupons should be imported.
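As a minimal sketch, assuming the coupon code column is named code (check the downloadable template for the exact field name), a coupon import file targeting one pool could look like this:
code,parent
SUMMER-0001,summer_pool
SUMMER-0002,summer_pool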
Once you’ve updated the import file with all the necessary attribute values, remember to click the Upload button at the bottom of the page.
Import setup
If the upload is successfully processed, you’ll receive a success message at the top of the page, indicating that you can proceed with the data import configuration.
After uploading the import file, it will be added to the list of imports on the Imports page with uploaded status. If you prefer not to complete the import configuration immediately after the upload, you can return to it anytime using the Setup button that appears next to the import in the list.
Learn how importing historical transaction records differs from other data imports under Importing historical transactions.
Field settings
Click the + button next to Fields to adjust the headers of your CSV file or the field names used in your JSON file to match the attribute names within Antavo. This step is optional.
From: The column header in your import file.
To: The name of the target attribute to be populated by the From column values of the import file. Please contact the Antavo Service Desk for assistance in defining the field name.
Format: The format of the date values in the data column. It must adhere to the PHP date format.
Default: Empty values in the file will be filled with this default value.
Map values: You can replace the values of the data fields in your import file by mapping the current values with the values to be imported to the corresponding Antavo data fields. Contact the Antavo Service Desk if you need assistance in adding values acceptable to specific Antavo data fields (e.g., mapping woman/man values to female/male for importing into the Antavo gender field).
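As an illustration, suppose your import file stores gender as woman/man in a gender_code column and dates in day/month/year order in a signup_date column (the column names and the registered_at target are hypothetical); the field settings could then be configured as follows:
From: gender_code | To: gender | Map values: woman → female, man → male
From: signup_date | To: registered_at | Format: d/m/Y (the PHP date format matching values like 31/05/2021)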
Scheduling
Scheduling allows imports to run at a specified time or be set as recurrent based on the UTC timezone.
By default, CSV or JSON files are imported immediately (None option), but importing can be scheduled by selecting the On-date or Repeated option from the dropdown list. Ensure you enter the date in a valid format; otherwise, the import cannot be executed.
On-date: The import starts at a predefined date and time.
Use the date and time selector or enter the details manually in the Date and Time fields.
Repeated: Select this option to set the data import to be triggered periodically.
Daily: Set the time of day when the import should start.
Weekly: Select the day(s) of the week and set the time of day.
Monthly: Select the day(s) of the month and set the time of day. You can use the Last value to ensure the import starts on the last day of the month, regardless of the number of days in a given month.
Scheduling of previously configured imports may be modified after creation.
Queued row processing
Enabling the queued row processing option initiates the pre-processing of imported records before insertion into the database. Preloading accelerates the import procedure and highlights any records that cannot be imported.
File format options
CSV file imports
Delimiter
Use the dropdown list to select the value delimiter that is used in the file. The default option is the comma, but you can also choose a semicolon, pipe, or tab.
Enclosure
Currently, the double quote is the only option to indicate that a delimiter character is used within a single field value. For example, if a comma is used as the delimiter, the value "news, entertainment" will be processed as a single value in one data field, while without quotes, it would be processed as two separate values.
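For example, with the comma delimiter and the double-quote enclosure, a data row could look like this (the column names are illustrative):
name,interests
Jane Doe,"news, entertainment"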
JSON file imports
Object and value in one row
When this option is enabled, only JSON files with object key-value pairs on the same row can be imported.
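One possible shape for such a file, keeping the array structure described under the JSON chapter below while placing each object and all of its key-value pairs on a single row, is the following sketch (field values are placeholders):
[
{"status":"active","type":"gift_card","name":"Example reward 1"},
{"status":"active","type":"gift_card","name":"Example reward 2"}
]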
Module options
Duplication handling
You can decide how to handle data rows that include entities already stored in the Antavo database.
Skip: Data rows of existing entities will be skipped, leaving the entity unchanged.
Update: Data rows of existing entities will be processed, and the entity will be updated according to the CSV file data.
Note that when updating a set of uploaded coupons, only unassigned coupons can be updated.
Coupon pool
Use the dropdown list to select the coupon pool previously configured in the Coupons module to which the coupons should be uploaded. You can either select a coupon pool, or choose the Based on the chosen file option if the ID of the coupon pool has been added as a data column in your import file.
Reward
This option is only applicable if you upload coupons directly from the reward configuration page instead of using a coupon pool created in the Coupons module. Use the dropdown list to select the coupon-type reward previously configured in the Rewards module to which the coupons should be uploaded. You can either select a reward, or choose the Based on the chosen file option if the ID of the reward has been added as a data column in your import file.
Please note that if you select a coupon pool or reward but add the ID of another pool or reward to the applicable column of the import file, these data column values are ignored, and all the coupon codes will be uploaded to the selected coupon pool or reward automatically.
Click Start import to begin the import process. Once the import has started, the settings (except for scheduling settings) are non-modifiable.
Import summary
Once the process has been initiated, the View tab will be added to monitor the results of the import process.
If queued processing has been enabled, the Log section on the View page is empty; all the record-level results are listed under the Preloaded data tab.
While the Log details the output of the process, the File info summarizes the import file properties.
The Results section shows if the import file was successfully processed and displays the number of:
Entities that were inserted or updated in the database
Duplications handled
Erroneous entries
If the number of errors is non-zero, a red errors label is displayed in the Results section, indicating that some of the records were not imported. Please check the Log section to learn more about why the import failed.
Importing historical transactions
File upload
After selecting the Historical transaction data model, upload one or more JSON files by clicking Select Files or by dragging and dropping files from your computer into the field.
This page includes a link to download a JSON template, whose structure must be followed when creating your JSON file. You can find detailed instructions on creating a JSON file for the import process of historical transactions in the JSON file for historical transaction imports section of this document.
The Fields collapsible section is also available for the Historical transaction data model, similar to other data models. This section provides information regarding the various fields used within the model. For a detailed description, please refer to the Fields chapter of this document.
Pre-processing
Click Start upload and validation. Before the historical transaction records can be imported into the database, the uploaded files are pre-processed and validated for syntax or data errors to ensure that only valid, error-free historical data is imported.
During the pre-processing, a message appears next to the Start upload and validation button showing the progress of the pre-processing and validation. The percentage at the end of the row of each file shows the progress of the upload of the specific file.
Once the upload has finished, one of the following icons is displayed at the end of the row of each uploaded file:
✅ - Displayed if no errors were found in the file.
🚫 - Displayed if the file contains a JSON syntax error.
⚠️ - Displayed if the file contains invalid data (e.g., there is no checkout event for a given transaction, or there are two events with the same transaction ID but with a different customer, etc.).
During the pre-processing and validation process, only the syntax and data integrity of the import file are validated. Note that the file data is not checked against the database at this stage.
Click Show errors to display detailed information about the flagged errors. The Historical transaction import errors page lists each row containing an error, the related transaction ID, and an error message that allows for easy identification of the place and type of the error.
If critical information about a transaction is ambiguous or missing, all actions related to that transaction are invalidated, and all elements of the transaction are removed from the import.
Before continuing with the import, you can remove a file that contains errors by clicking the X button at the end of its row. After fixing the errors, you can select the updated file, and upload and validate it again.
When you upload and validate one or more JSON files, a historical_transaction import entry is created on the Imports page. If you cancel the import action, you can go back to the import instance on the Imports page and proceed with the import by clicking Edit at the end of its row.
Please note that uploaded files are kept for 7 days. After this period, the uploaded files are deleted from the database, and attempting to proceed with the import will be unsuccessful.
Start import
Click Start import to import all uploaded and validated files. A pop-up confirmation message appears with a warning that invalid transactions will be excluded from the import.
Click Start to proceed with the import.
Once the import process has been started, you are redirected to the Historical Transaction page which shows the import details. At the top of the page, a progress bar updated in real-time shows the percentage of processed data. The progress bar disappears once the import has finished. Under Import results statistics, you can find information updated in real-time about:
The number of processed events
The number of processed transactions
The number of encountered errors
The status of the import and the list of imported files are displayed on the right side of the page.
You can access the import details page anytime by clicking the Details button at the end of the row of the import in the Imports list.
During the import process, the file data is checked against the database. Errors encountered during this process are logged under Errors. For example, if a transaction ID found in the uploaded file already exists in the database, an error is logged, and all events related to that transaction ID are invalidated and removed from the import.
Import results
Imported transactions appear on each customer's Transactions page with an accepted status. Partially or fully refunded transactions appear with a refunded status.
On the customer's Events page, events imported with the Historical Transaction import are listed with a Historical Transaction Import source. The imported historical transaction data can also be retrieved via the Transaction and Events APIs.
Each historical transaction import triggers the recalculation of transaction-related customer attributes such as the average basket value and the total purchase amount to ensure comprehensive, accurate customer profiles.
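As a simple illustration (assuming the average basket value is a mean over the customer's accepted transactions), a customer with one existing transaction of 100 who gains an imported historical transaction of 50 would end up with a total purchase amount of 150 and an average basket value of 75.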
Please note that currently points cannot be awarded or removed when importing and processing historical transactions, so imported transactions do not modify customers' calculated points. Adjustments to customer points must be made manually.
Checkout events imported with the Historical transaction import can be refunded later by registering a refund or refund_item event with the same transaction ID.
Creating an import file
Before you begin, you can download templates from the data source selection page. Both CSV and JSON templates are available to help you structure your import file correctly. It is essential to follow the format of the templates exactly; otherwise, the import will not be successful.
CSV
Header
In the first row of the CSV file, define the names of the data fields as the headers of your data columns. You can use the names of the target loyalty fields directly, or you can map the fields of the import file to the loyalty fields during the import setup via the Field settings section.
Here is an example of how the header of a reward import appears in the import file without mapping.
status,type,name,description,start_date,end_date,points
When importing events, ensure that you include both the customer and action columns; otherwise, the import process will fail.
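For example, a minimal event import file could look like the following sketch (the action value and the customer IDs are placeholders; any event-specific data can be added as extra columns):
customer,action
customer_001,newsletter_signup
customer_002,newsletter_signup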
Rows
After preparing the headers, add the data values in subsequent rows, with each row representing one item to register.
Here is an example of the content of the reward import file, including the headers and the rewards to import.
status,type,name,description,start_date,end_date,points
inactive,downloadable,reward name,reward description,2021-05-31,2021-06-30,1500
There’s no limit on the number of items that can be included in an import file.
JSON
JSON files should be formatted to ensure clarity and consistency, making it easier for the system to process the data accurately. The file should contain a single array of JSON objects. Each object represents an individual data entry and must include all required fields, following the schema of the downloadable template. Ensure the file is encoded in UTF-8 format to support a wide range of characters, and that every key matches the target field names, or map them during the import setup via the Field Settings section.
When importing JSON data (except for historical transactions), the file should follow this structure:
[
{
"status": "active",
"type": "gift_card",
"name": "Example Name translatable",
"description": "Example Description translatable",
"redeem_instructions": "Example Redeem instructions translatable",
"terms": "Example Terms and Conditions translatable",
"claim_button_label": "Example Claim button label translatable",
"image_url": "Example Image URL",
"cost": 100,
"price": 100,
}
]
JSON file for historical transaction imports
When creating a JSON file for Historical transaction imports, it's important to note that the formatting differs from that required for other data models. You must avoid formatting (no indents, extra spaces, or newlines within a single JSON object), and each line should represent a single JSON object, with no additional array or enclosing structure.
The JSON file containing the historical transaction data must follow the syntax and formatting provided in the JSON file template, and must fulfill the following requirements:
The file must not be formatted.
The file must contain one JSON object per transaction action (nested items are supported).
Each JSON object must be on a separate row. The objects must not be separated by commas, and must not be arranged in an array.
The customer ID must be the same for all actions related to the same transaction ID. Transactions containing actions with different customer IDs are invalidated.
Each imported transaction must be complete: all actions related to a specific transaction (e.g., checkout, refund) must be included in the same import file, as it is not possible to use the import to update or overwrite transaction data that has already been imported.
The accepted transaction actions are the following:
checkout
refund
refund_item
partial_refund
Events related to the Checkout accept module (e.g., checkout_accept and checkout_update) cannot be imported, but transactions with such events can be imported by including a checkout action with a timestamp field value that matches the final value of the transaction's auto-accept attribute.
{"transaction_id":"transaction_1","action":"checkout","total":100,"customer":"example_customer_ID","points_rewarded":100,"timestamp":"2022-06-03T17:39:00+01:00","items":[{"product_id":"product_1","product_name":"Product 001","quantity":2,"subtotal":100,"points_rewarded":100}]}
{"transaction_id":"transaction_1","action":"partial_refund","amount":50,"customer":"example_customer_ID","timestamp":"2022-06-02T17:39:00+01:00"}
{"transaction_id":"transaction_1","action":"refund_item","customer":"example_customer_ID","timestamp":"2022-06-02T17:39:00+01:00","product_id":"product_1","quantity":1}
{"transaction_id":"transaction_1","action":"refund","customer":"example_customer_ID","timestamp":"2022-06-02T17:39:00+01:00"}
If the Multi-accounts module is enabled in your workspace, the template will also include the account field.
Import details
After creating the import, you’ll be automatically directed to the View tab of the import page to monitor its status, visible in the upper right corner. In the Log section, you can review any errors encountered during the import process.
The Result section provides the following insights:
Inserted: Number of records that were added during the import process.
Updated: Number of records that were modified during the import process.
Duplication: Number of records that were already present in the database.
Errors: Number of records that were not processed due to errors, indicated in the Log section of the page.
You can access the Import details page anytime from the Imports list by clicking the Details button next to the import.
Learn how the details page of the historical transaction imports differs from that of other data imports under Importing historical transactions.
Downloading import files
To download previously imported files, follow these steps:
Navigate to the Imports module page
Click the Details button in the corresponding row
Navigate to the View tab which will display the File info section on the right-hand side
Click on the name of the import file
If there is no name, File not available anymore will appear in place of the file name, indicating that the file is no longer accessible in the database.
The file will be downloaded with the same filename and format as it was previously imported.
To ensure the security of personally identifiable information, Management UI administrators can configure access levels for users regarding downloads under the Roles page.
Inactivate and reactivate repeated imports
You can inactivate repeated scheduled imports anytime by following these steps:
Click the Set inactive button located in the upper right corner
Confirm your choice in the dialog that appears.
Once the import is inactivated, the button is replaced by a Set active action button, which you can use when you decide to reactivate the import again.
Change import schedule
The scheduling of previously created scheduled imports can be updated through the Import page by following these steps:
Navigate to the Imports module page
Click the Details button in the corresponding row
Navigate to the Change Schedule tab
Modify scheduling settings as needed
Switching between On-date and Repeated schedule types is not possible.
Click Update to save your changes
Please note that changing schedule settings does not apply to On-date scheduled imports that have already run their course.
Preloaded data
If import queued row processing is enabled, preloaded records are displayed once the import process is initiated and pre-processing is completed.
The following information is displayed for each record:
Row number: The ordinal number of the row.
Status: Status of the record import (pending, finished, failed).
Error: The error that occurred during the import process, if any.
Imported at: Date of the import execution.
By clicking the > icon at the beginning of each row, you can access detailed information about the preloaded record:
Field name
Field value
Field target (the value provided in the To field under Field settings, if any)
Using the search bar at the top, you can filter items by status and row number.
Logs
A Logs tab is automatically added if the import was created with repeated scheduling. The page lists all runs of the configured import with the run date and the current status, which can be:
Processing: The import process is in progress.
Success: The import process has finished.
Failed: The import process has failed. Please contact the Antavo Service Desk for further information if the issue persists.
At the end of each row, there is a button to download the import file.
If you turned on the queued row processing option during import setup, you can view the preloaded data of each import occurrence by clicking the corresponding Preloaded data button.