Loading data from the file on S3 to Redshift - 7.3

Amazon Redshift

Version
7.3
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Data Quality and Preparation > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Design and Development > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Last publication date
2024-02-21

Procedure

  1. Double-click tRedshiftBulkExec to open its Basic settings view on the Component tab.
  2. In the Host field, press Ctrl + Space and select context.redshift_host from the list to fill in this field.
    Do the same to fill the following fields:
    • the Port field with context.redshift_port,
    • the Database field with context.redshift_database,
    • the Schema field with context.redshift_schema,
    • the Username field with context.redshift_username,
    • the Password field with context.redshift_password,
    • the Access Key field with context.s3_accesskey,
    • the Secret Key field with context.s3_secretkey, and
    • the Bucket field with context.s3_bucket.

  3. In the Table Name field, enter the name of the table to be written. In this example, it is person.
  4. From the Action on table list, select Drop table if exists and create.
  5. In the Key field, enter the name of the file on Amazon S3 to be loaded. In this example, it is person_load.
  6. Click the [...] button next to Edit schema and, in the pop-up window, define the schema by adding two columns: ID of Integer type and Name of String type. A rough equivalent of the bulk load that this configuration performs is sketched after the procedure.
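
With these settings, tRedshiftBulkExec effectively performs a bulk COPY of the S3 file into the target Redshift table. The following is a minimal JDBC sketch of that equivalent operation, not the code the component generates: it assumes the Amazon Redshift JDBC driver is on the classpath, and all connection values, the schema name, the bucket name, and the credentials are hypothetical placeholders standing in for the context variables used in the procedure.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RedshiftBulkLoadSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection values standing in for context.redshift_host,
            // context.redshift_port, context.redshift_database, and so on.
            String url = "jdbc:redshift://redshift-host.example.com:5439/mydatabase";

            try (Connection conn = DriverManager.getConnection(url, "redshift_user", "redshift_password");
                 Statement stmt = conn.createStatement()) {

                // Step 4: "Drop table if exists and create" on the person table,
                // using the two columns defined in step 6 (ID Integer, Name String).
                stmt.executeUpdate("DROP TABLE IF EXISTS myschema.person");
                stmt.executeUpdate("CREATE TABLE myschema.person (id INTEGER, name VARCHAR(255))");

                // Step 5: load the person_load file from the S3 bucket into the table.
                // The bucket name and credentials are placeholders for context.s3_bucket,
                // context.s3_accesskey, and context.s3_secretkey.
                stmt.executeUpdate(
                    "COPY myschema.person FROM 's3://my-bucket/person_load' "
                  + "CREDENTIALS 'aws_access_key_id=MY_ACCESS_KEY;aws_secret_access_key=MY_SECRET_KEY'");
            }
        }
    }

In practice the component also lets you control details such as the field separator and file format of the staged file; the sketch above omits those options and relies on the COPY defaults.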