Data Destinations
This document defines the different destinations where the importer can push the data uploaded by the users. The following is the list of supported destinations.
Only one destination can be selected per sheet at a time.
None
The user-uploaded data will not be pushed anywhere. The files will, however, still be available for download via the csvbox.io admin.
API / Webhook
The data will be pushed to a webhook endpoint as configured in the sheet settings. You can choose between JSON, XML, and FORM Data formats for receiving the data at your webhook. The data will be pushed in chunks of rows. The number of rows per chunk can be configured in the sheet settings.
Sample JSON POST to your API:
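The exact payload fields depend on your sheet configuration; the sketch below is illustrative only. The import_id and row_number attributes are referenced elsewhere in this document, while the remaining keys and values are placeholders.

```json
{
  "import_id": 125842,
  "sheet_id": 9,
  "row_count": 3,
  "rows": [
    { "row_number": 1, "Name": "Alice", "Email": "alice@example.com" },
    { "row_number": 2, "Name": "Bob", "Email": "bob@example.com" },
    { "row_number": 3, "Name": "Carol", "Email": "carol@example.com" }
  ]
}
```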
The data will come in as HTTP POST requests. Each request will have an array of rows based on the chunk size defined in the sheet settings. You can set the chunk size to 1 to receive 1 record per HTTP request.
If the row count is greater than 10,000, the chunk size will default to 1,000. A large row count combined with a small chunk size increases the processing time significantly.
Request Type
To push the data to the destination, the webhook can be called in two modes:
Sequential - The webhook will be invoked sequentially, one request after another.
Parallel (Default) - Multiple webhook requests will be invoked concurrently, which means your application will receive chunks of rows in parallel. This method reduces the overall import time. However, your application will need to manage the order of the data based on the row_number attribute (see the receiver sketch below).
If you want to jump in and get started, we recommend testing with webhook.site to get a webhook URL. For testing on your local machine, we recommend using ngrok.
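If you would rather inspect the payloads with code while testing locally, a minimal receiver along the lines of the sketch below (Python with Flask, exposed through ngrok) can collect incoming chunks and restore row order. The payload keys other than row_number are assumptions carried over from the illustrative sample above.

```python
# Minimal local webhook receiver for inspecting csvbox chunks.
# Assumes the illustrative payload shape shown above; only the
# row_number attribute is documented, the other keys are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)
received_rows = []  # rows accumulated across (possibly parallel) chunks

@app.route("/csvbox-webhook", methods=["POST"])
def csvbox_webhook():
    payload = request.get_json(force=True) or {}
    received_rows.extend(payload.get("rows", []))
    # Chunks may arrive out of order in Parallel mode, so restore the
    # original ordering using the row_number attribute before processing.
    received_rows.sort(key=lambda row: row.get("row_number", 0))
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    # Expose this port with `ngrok http 5000` and use the generated
    # URL as the webhook endpoint in the sheet settings.
    app.run(port=5000)
```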
Amazon S3
The files uploaded by the users can be pushed to the AWS S3 Bucket of your choice. You simply need to select the destination type as 'Amazon S3' and provide the AWS credentials, bucket/folder name, and access policy for storing the files.
The data will be stored as S3 objects with the name {{import_id}}_{{user_id}}.csv, where user_id is the custom user attribute that you reference via the setUser method while installing the importer code. The other 4 custom user attributes will be saved as the user-defined metadata of the S3 object.
The AWS credentials need the following 3 minimum permissions for uploading files to S3 (a sample policy covering them follows the list):
ListBucket is required for testing the connection.
PutObject is required to add objects to S3.
PutObjectTagging is required to add the tags (metadata) to the uploaded objects.
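Put together, a minimal IAM policy granting these three actions might look like the sketch below. The bucket name and paths are placeholders that you would replace with your own bucket and folder.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectTagging"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```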
S3 Data Type
You have the option to store the data in S3 in either CSV or JSON format.
MySQL
Import CSV files and Excel sheets directly into your MySQL tables. How it works:
Select the destination type as 'MySQL Database'.
Connect your MySQL database by providing the credentials.
Specify the table name where you want the data to be pushed.
Map sheet columns to the table columns.
You can also map custom attributes to table columns.
The user CSV data will then be appended directly to the MySQL table.
SQL Server
Import CSV files and Excel sheets directly into your SQL Server tables. How it works:
Select the destination type as 'SQL Server Database'.
Connect your SQL Server database by providing the credentials.
Specify the table name where you want the data to be pushed.
Click the 'Test Connection' button.
If the connection is successful, then click the 'Map Columns' button and match the sheet template columns to the SQL Server table columns.
You can also map custom attributes to table columns.
Select between the following 2 operations:
Insert - The importer will always push the incoming CSV data as new records in the database.
Upsert - The importer will check if the record exists in the database. If the record exists, then it will be updated with the incoming data from the CSV. If the record does not exist, then a new record will be inserted. The record check will be done based on the index keys specified in the mapping modal.
The Upsert operation will be significantly slower than the Insert operation. For the Insert operation, the records can be pushed in chunks, whereas for the Upsert operation only one record can be processed at a time, and each record requires multiple queries.
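As a conceptual illustration of that difference (this is not the importer's actual implementation, and SQLite is used here only to keep the sketch self-contained and runnable), the Insert path can batch a whole chunk in one statement, while the Upsert path issues a lookup plus a write per record:

```python
# Conceptual sketch of why Upsert is slower than Insert: Insert batches
# whole chunks, Upsert checks and writes one record at a time.
# Table and column names are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (email TEXT PRIMARY KEY, name TEXT)")

rows = [("alice@example.com", "Alice"), ("bob@example.com", "Bob")]

# Insert: push an entire chunk of rows in a single batched statement.
conn.executemany("INSERT INTO contacts (email, name) VALUES (?, ?)", rows)

# Upsert: one record at a time, with a lookup on the index key (email)
# followed by either an UPDATE or an INSERT.
for email, name in rows:
    exists = conn.execute(
        "SELECT 1 FROM contacts WHERE email = ?", (email,)
    ).fetchone()
    if exists:
        conn.execute("UPDATE contacts SET name = ? WHERE email = ?", (name, email))
    else:
        conn.execute("INSERT INTO contacts (email, name) VALUES (?, ?)", (email, name))

conn.commit()
```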
Google Sheets
Import CSV files and Excel sheets directly into Google Sheets. Here is how it works:
Select the destination type as 'Google Sheets'.
Connect your Google account by clicking the Google button and accepting the relevant permissions.
The importer requires permission to view the list of Google sheets in your account and edit sheet data.
Provide the Google sheet name.
Specify the worksheet name where you want the data to be pushed.
Map the template columns to the Google sheet columns.
You can also map custom attributes to sheet columns.
The user CSV data will then be added directly to the Google sheet.
Bubble
Import user CSV files and Excel sheets directly into your Bubble app. More information here.
Notion
Import user CSV files and Excel sheets directly into your Notion databases. More information here.
PostgreSQL
Import CSV files and Excel sheets directly into your PostgreSQL tables. How it works:
Select the destination type as 'PostgreSQL'.
Connect your PostgreSQL database by providing the credentials.
Specify the table name where you want the data to be pushed.
Map sheet columns to the table columns.
You can also map custom attributes to table columns.
The user CSV data will then be appended directly to the PostgreSQL table.
Airtable
Import CSV files and Excel sheets directly into your Airtable. Here is how it works:
Select the destination type as 'Airtable'.
Connect your Airtable by providing the credentials.
Specify the table name where you want the data to be pushed.
Map sheet columns to the Airtable table columns.
You can also map custom attributes to table columns.
The user CSV data will then be directly appended to the Airtable table.
The data fields from Airtable will be available in the Map Column modal only if they have data in the first row. You may add dummy data to each data field in the first row so that they appear in the Map Column modal.
There are 2 operations available for Airtable:
1. Insert
Creates a new row in the table.
2. Upsert
Updates an existing row if a specified value already exists in the table, and inserts a new row if the specified value doesn't already exist.
The column to check for uniqueness needs to be selected in the Column Mapping popup.
If zero matches are found, a new row will be created.
If one match is found, that row will be updated.
If multiple matches are found, the request will fail.
Zapier
Import user CSV files and Excel sheets to Zapier. More information here.
Webflow
Import user CSV files and Excel sheets to Webflow. More information here.
FTP Server
The files uploaded by the users can be pushed to your FTP server. You simply need to select the destination type as 'FTP' and provide the connection details and the folder name for storing the files.
The data will be stored as CSV files with the name {{import_id}}_{{user_id}}.csv, where user_id is the custom user attribute that you reference via the setUser method while installing the importer code.