Loading data from the file on S3 to Redshift - 7.1

Amazon Redshift

author
Talend Documentation Team
EnrichVersion
Cloud
7.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Data Quality and Preparation > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Design and Development > Third-party systems > Amazon services (Integration) > Amazon Redshift components
EnrichPlatform
Talend Studio

Procedure

  1. Double-click tRedshiftBulkExec to open its Basic settings view on the Component tab.
  2. In the Host field, press Ctrl + Space and select context.redshift_host from the proposal list to fill in this field.
    Fill in the following fields in the same way:
    • the Port field with context.redshift_port,

    • the Database field with context.redshift_database,

    • the Schema field with context.redshift_schema,

    • the Username field with context.redshift_username,

    • the Password field with context.redshift_password,

    • the Access Key field with context.s3_accesskey,

    • the Secret Key field with context.s3_secretkey, and

    • the Bucket field with context.s3_bucket.
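The context variables referenced above can be sketched as follows. This is an illustrative Python sketch, not Talend-generated code; all values are hypothetical placeholders, and the JDBC URL shown is the standard form a Redshift connection string takes from the Host, Port, and Database fields.

```python
# Hypothetical values for the context variables used in the procedure above.
# In the Job they are defined in the Contexts view and referenced as
# context.<name> in each component field.
context = {
    "redshift_host": "mycluster.abc123.us-east-1.redshift.amazonaws.com",
    "redshift_port": "5439",        # Redshift's default port
    "redshift_database": "dev",
    "redshift_schema": "public",
    "redshift_username": "awsuser",
    "redshift_password": "***",     # placeholder credentials
    "s3_accesskey": "AKIA...",      # placeholder credentials
    "s3_secretkey": "***",
    "s3_bucket": "my-bucket",       # hypothetical bucket name
}

# Illustrative JDBC URL built from the connection fields.
jdbc_url = (
    f"jdbc:redshift://{context['redshift_host']}:"
    f"{context['redshift_port']}/{context['redshift_database']}"
)
print(jdbc_url)
```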

  3. In the Table Name field, enter the name of the table to be written. In this example, it is person.
  4. From the Action on table list, select Drop table if exists and create.
  5. In the Key field, enter the name (the object key) of the file on Amazon S3 to be loaded. In this example, it is person_load.
  6. Click the [...] button next to Edit schema and, in the pop-up window, define the schema by adding two columns: ID of Integer type and Name of String type.
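The settings in steps 3 through 6 can be pictured as the SQL that a bulk load from S3 into Redshift amounts to: drop and recreate the person table, then issue a COPY from the S3 object. The sketch below only builds these statements as strings for illustration; the schema name, bucket, and credentials are hypothetical placeholders, and the exact statements Talend issues may differ.

```python
# Illustrative values (assumptions, not taken from the Job).
schema = "public"
bucket = "my-bucket"
access_key = "AKIA..."   # placeholder credentials
secret_key = "***"

table = "person"         # Table Name field (step 3)
key = "person_load"      # Key field: the S3 object to load (step 5)

# Action on table: "Drop table if exists and create" (step 4),
# with the two schema columns ID (Integer) and Name (String) (step 6).
ddl = (
    f'DROP TABLE IF EXISTS "{schema}"."{table}";\n'
    f'CREATE TABLE "{schema}"."{table}" (ID INTEGER, Name VARCHAR(255));'
)

# The bulk load itself: a Redshift COPY from the S3 object.
copy_stmt = (
    f'COPY "{schema}"."{table}" '
    f"FROM 's3://{bucket}/{key}' "
    f"CREDENTIALS 'aws_access_key_id={access_key};"
    f"aws_secret_access_key={secret_key}';"
)

print(ddl)
print(copy_stmt)
```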