Preparing a file and uploading it to S3 - 7.0

Amazon Redshift

Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
Talend Studio


  1. Double-click tRowGenerator to open its RowGenerator Editor.
  2. Click the [+] button to add two columns: ID of Integer type and Name of String type.
  3. Click the cell in the Functions column and select a function from the list for each column. In this example, select Numeric.sequence to generate sequence numbers for the ID column and select TalendDataGenerator.getFirstName to generate random first names for the Name column.
  4. In the Number of Rows for RowGenerator field, enter the number of data rows to generate. In this example, it is 20.
  5. Click OK to close the schema editor and accept the schema propagation when prompted by the pop-up dialog box.
  6. Double-click tRedshiftOutputBulk to open its Basic settings view on the Component tab.
  7. In the Data file path at local field, specify the local path for the file to be generated. In this example, it is E:/Redshift/redshift_bulk.txt.
  8. In the Access Key field, press Ctrl + Space and select context.s3_accesskey from the list to fill in this field.
    Do the same to fill the Secret Key field with context.s3_secretkey and the Bucket field with context.s3_bucket.
  9. In the Key field, enter a new name for the generated file after it is uploaded to Amazon S3. In this example, it is person_load.
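Under the hood, the Job configured above generates a row set and writes it to a local delimited file before uploading it to S3. As a rough illustration only (not the code Talend Studio actually generates), the following standalone Java sketch mimics the tRowGenerator and file-writing part of the flow: a Numeric.sequence-style ID column, a random first name standing in for TalendDataGenerator.getFirstName (the FIRST_NAMES array here is a made-up sample set), 20 rows, and a local redshift_bulk.txt output. The final S3 upload is shown only as a comment, since it would require AWS credentials and the AWS SDK.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RedshiftBulkPrep {
    // Hypothetical stand-in for TalendDataGenerator.getFirstName
    private static final String[] FIRST_NAMES = {"Adam", "Beth", "Carl", "Dana", "Elena"};

    public static void main(String[] args) throws IOException {
        Random random = new Random();
        List<String> rows = new ArrayList<>();
        // ID column: sequence numbers 1..20, like Numeric.sequence with 20 rows
        for (int id = 1; id <= 20; id++) {
            String name = FIRST_NAMES[random.nextInt(FIRST_NAMES.length)];
            // Name column: a random first name; fields separated by ";"
            rows.add(id + ";" + name);
        }
        // Local data file, analogous to the "Data file path at local" setting
        Path file = Paths.get("redshift_bulk.txt");
        Files.write(file, rows);
        System.out.println("Wrote " + rows.size() + " rows to " + file);
        // The S3 upload step (Key "person_load") would follow, e.g. with the AWS SDK v1:
        // s3Client.putObject(context.s3_bucket, "person_load", file.toFile());
    }
}
```

Running the sketch produces a 20-line semicolon-delimited file, which is the shape of input that tRedshiftOutputBulk prepares for a subsequent Redshift COPY from S3.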