tRedshiftBulkExec - 6.1

Talend Components Reference Guide


tRedshiftBulkExec properties

The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process to load data to Amazon Redshift from a delimited/CSV file on Amazon S3. In the first step, a delimited/CSV file is generated. In the second step, this file is used in the INSERT statement that feeds Amazon Redshift. These two steps are fused together in the tRedshiftOutputBulkExec component. The advantage of using two separate steps is that the data can be transformed before it is loaded to Amazon Redshift.

Component family

Databases/Amazon Redshift

 

Function

This component loads data into a table on Amazon Redshift from a flat file located on Amazon S3.

Purpose

This component allows you to load data to Amazon Redshift from a file on Amazon S3.

Basic settings

Property Type

Either Built-In or Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file in which the properties are stored. The database connection fields that follow are completed automatically using the data retrieved.

Database settings

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

 

Host

Type in the IP address or hostname of the database server.

 

Port

Type in the listening port number of the database server.

 

Database

Type in the name of the database.

 

Schema

Type in the name of the schema.

 

Username and Password

Type in the database user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Table Name

Specify the name of the table to be written. Note that only one table can be written at a time.

 

Action on table

On the table defined, you can perform one of the following operations:

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table does not exist and gets created.

  • Create table if not exists: The table is created if it does not exist.

  • Drop table if exists and create: The table is removed if it already exists and created again.

  • Clear table: The table content is deleted. You can roll back the operation.
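
For instance, Drop table if exists and create roughly corresponds to SQL of the following shape, with the column definitions taken from the component schema (the statement below is an illustrative sketch, not the exact SQL generated by the component):

    DROP TABLE IF EXISTS person;
    CREATE TABLE person (
      "ID" INT,
      "Name" VARCHAR(255)
    );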

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

S3 Setting

Access Key

Specify the Access Key ID that uniquely identifies an AWS Account. For how to get your Access Key and Access Secret, visit Getting Your AWS Access Keys.

 

Secret Key

Specify the Secret Access Key, which together with the Access Key forms your AWS security credentials.

To enter the secret key, click the [...] button next to the secret key field, and then in the pop-up dialog box enter the secret key between double quotes and click OK to save the settings.

 

Bucket

Type in the name of the Amazon S3 bucket, namely the top level folder, in which the file is located.

 

Key

Type in the object key assigned to the file on Amazon S3 to be loaded.

Advanced settings

File type

Select the type of the file on Amazon S3 from the list:

  • Delimited file or CSV: a delimited/CSV file.

  • JSON: a JSON file.

  • Fixed width: a fixed-width file.

 

Fields terminated by

Enter the character used to separate fields.

This field appears only when Delimited file or CSV is selected from the File type list.

 

Enclosed by

Select the character with which the fields are enclosed.

This list appears only when Delimited file or CSV is selected from the File type list.

 

JSON mapping

Specify how to map the data elements in the JSON source file on Amazon S3 to the columns in the target table on Amazon Redshift. The valid values are:

  • auto: Map the data by matching object keys or names in the source name/value pairs to the names of columns in the target table. The argument is case-sensitive and must be enclosed in double quotation marks.

  • s3://jsonpaths_file: Map the data using the named JSONPaths file. The parameter must be an Amazon S3 object key that is enclosed in double quotation marks and explicitly references a single file, for example, s3://mybucket/jsonpaths.txt. For more information, see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.

This field appears only when JSON is selected from the File type list.
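
As an illustration, a JSONPaths file referenced as s3://mybucket/jsonpaths.txt (hypothetical bucket and file name) that maps two source elements to the first two columns of the target table could look like this:

    {
      "jsonpaths": [
        "$.id",
        "$.name"
      ]
    }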

 

Fixed width mapping

Enter a string that specifies a user-defined column label and column width between double quotation marks. The format of the string is:

ColumnLabel1:ColumnWidth1,ColumnLabel2:ColumnWidth2,....

Note that the column label in the string has no relation to the table column name and it can be either a text string or an integer. The order of the label/width pairs must match the order of the table columns exactly.

This field appears only when Fixed width is selected from the File type list.
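
For example, for a file whose rows contain an 8-character ID followed by a 32-character name, the mapping string could be the following (the labels are arbitrary, the widths must match the file):

    "id:8,name:32"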

 

Compressed by

Select this check box and from the list displayed select the compression type of the source file.

 

Decrypt

Select this check box if the file is encrypted using Amazon S3 client-side encryption. For more information, see Loading Encrypted Data Files from Amazon S3.

 

Encryption key

Specify the encryption key that was used to encrypt the file.

This field appears only when the Decrypt check box is selected.

 

Encoding

Select the encoding type of the data to be loaded from the list.

 

Date format

Select one of the following items from the list to specify the date format in the source data:

  • NONE: No date format is specified.

  • PATTERN: Select this item and specify the date format in the field displayed. The default date format is YYYY-MM-DD.

  • AUTO: Select this item if you want Amazon Redshift to recognize and convert automatically the date format.

 

Time format

Select one of the following items from the list to specify the time format in the source data:

  • NONE: No time format is specified.

  • PATTERN: Select this item and specify the time format in the field displayed. The default time format is YYYY-MM-DD HH:MI:SS.

  • AUTO: Select this item if you want Amazon Redshift to recognize and convert automatically the time format.

  • EPOCHSECS: Select this item if the source data is represented as epoch time, the number of seconds since Jan 1, 1970 00:00:00 UTC.

  • EPOCHMILLISECS: Select this item if the source data is represented as epoch time, the number of milliseconds since Jan 1, 1970 00:00:00 UTC.
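
For example, assuming EPOCHSECS or EPOCHMILLISECS is selected, a source value is interpreted as follows:

    1420070400      (EPOCHSECS)      -> 2015-01-01 00:00:00 UTC
    1420070400000   (EPOCHMILLISECS) -> 2015-01-01 00:00:00 UTC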

 

Settings

Click the [+] button below the table to specify more parameters for loading the data.

  • Parameter: Click the cell and select a parameter from the drop-down list.

  • Value: Set the value for the corresponding parameter. Note that you cannot set the value for a parameter (such as IGNOREBLANKLINES) that does not need a value.

For more information about the parameters, see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.
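
As a rough sketch only (not the exact statement generated by the component), the file-format fields above and the parameters set in this table end up as options on the COPY command that Amazon Redshift executes; the bucket, key, and credentials below are placeholders:

    COPY person
    FROM 's3://mybucket/person_load'
    CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
    DELIMITER ';'
    IGNOREHEADER 1
    IGNOREBLANKLINES;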

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Scenario 3: Reading data from MySQL databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
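
For example, assuming the component instance is named tRedshiftBulkExec_1, the error message can be read in a subsequent component with an expression such as:

    ((String)globalMap.get("tRedshiftBulkExec_1_ERROR_MESSAGE"))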

Usage

The tRedshiftBulkExec component supports loading data to Amazon Redshift from a delimited/CSV, JSON, or fixed-width file on Amazon S3, but the tRedshiftOutputBulk component now only supports generating and uploading a delimited/CSV file to Amazon S3. When you need to load data from a JSON or fixed-width file, you can use the tFileOutputJSON or tFileOutputPositional component together with the tS3Put component instead of using the tRedshiftOutputBulk component to generate and upload the file to Amazon S3.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Loading/unloading data from/to Amazon S3

This scenario describes a Job that generates a delimited file and uploads the file to S3, loads data from the file on S3 to Redshift and displays the data on the console, then unloads the data from Redshift to files on S3 per slice of the Redshift cluster, and finally lists and gets the unloaded files on S3.

Prerequisites:

The following context variables have been created and saved in the Repository tree view. For more information about context variables, see Talend Studio User Guide.

  • redshift_host: the connection endpoint URL of the Redshift cluster.

  • redshift_port: the listening port number of the database server.

  • redshift_database: the name of the database.

  • redshift_username: the username for the database authentication.

  • redshift_password: the password for the database authentication.

  • redshift_schema: the name of the schema.

  • s3_accesskey: the access key for accessing Amazon S3.

  • s3_secretkey: the secret key for accessing Amazon S3.

  • s3_bucket: the name of the Amazon S3 bucket.
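
For illustration, the demonstration values for these variables could look like the following (all placeholders; use the endpoint, credentials, and bucket of your own Redshift cluster and S3 account):

    redshift_host     = "mycluster.xxxxxxxxxxxx.us-east-1.redshift.amazonaws.com"
    redshift_port     = "5439"
    redshift_database = "dev"
    redshift_username = "awsuser"
    redshift_password = "********"
    redshift_schema   = "public"
    s3_accesskey      = "AKIAxxxxxxxxxxxxxxxx"
    s3_secretkey      = "********"
    s3_bucket         = "mybucket"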

Note that all context values shown above are for demonstration purposes only.

Adding and linking components

  1. Create a new Job and apply all context variables listed above to the new Job.

  2. Add the following components by typing their names in the design workspace or dropping them from the Palette: a tRowGenerator component, a tRedshiftOutputBulk component, a tRedshiftBulkExec component, a tRedshiftInput component, a tLogRow component, a tRedshiftUnload component, a tS3List component, and a tS3Get component.

  3. Link tRowGenerator to tRedshiftOutputBulk using a Row > Main connection.

  4. Do the same to link tRedshiftInput to tLogRow.

  5. Link tS3List to tS3Get using a Row > Iterate connection.

  6. Link tRowGenerator to tRedshiftBulkExec using a Trigger > On Subjob Ok connection.

  7. Do the same to link tRedshiftBulkExec to tRedshiftInput, tRedshiftInput to tRedshiftUnload, and tRedshiftUnload to tS3List.

Configuring the components

Preparing a file and uploading the file to S3

  1. Double-click tRowGenerator to open its RowGenerator Editor.

  2. Click the [+] button to add two columns: ID of Integer type and Name of String type.

  3. Click the cell in the Functions column and select a function from the list for each column. In this example, select Numeric.sequence to generate sequence numbers for the ID column and select TalendDataGenerator.getFirstName to generate random first names for the Name column.

  4. In the Number of Rows for RowGenerator field, enter the number of data rows to generate. In this example, it is 20.

  5. Click OK to close the schema editor and accept the propagation prompted by the pop-up dialog box.

  6. Double-click tRedshiftOutputBulk to open its Basic settings view on the Component tab.

  7. In the Data file path at local field, specify the local path for the file to be generated. In this example, it is E:/Redshift/redshift_bulk.txt.

  8. In the Access Key field, press Ctrl + Space and from the list select context.s3_accesskey to fill in this field.

    Do the same to fill the Secret Key field with context.s3_secretkey and the Bucket field with context.s3_bucket.

  9. In the Key field, enter a new name for the file to be generated after being uploaded on Amazon S3. In this example, it is person_load.
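
    The generated redshift_bulk.txt contains one line per generated record, for example (the separator depends on the Fields terminated by setting of tRedshiftOutputBulk, and the names are random, so yours will differ):

      1;Ashley
      2;Benjamin
      3;Christopher
      ...
      20;Zachary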

Loading data from the file on S3 to Redshift

  1. Double-click tRedshiftBulkExec to open its Basic settings view on the Component tab.

  2. In the Host field, press Ctrl + Space and from the list select context.redshift_host to fill in this field.

    Do the same to fill:

    • the Port field with context.redshift_port,

    • the Database field with context.redshift_database,

    • the Schema field with context.redshift_schema,

    • the Username field with context.redshift_username,

    • the Password field with context.redshift_password,

    • the Access Key field with context.s3_accesskey,

    • the Secret Key field with context.s3_secretkey, and

    • the Bucket field with context.s3_bucket.

  3. In the Table Name field, enter the name of the table to be written. In this example, it is person.

  4. From the Action on table list, select Drop table if exists and create.

  5. In the Key field, enter the name of the file on Amazon S3 to be loaded. In this example, it is person_load.

  6. Click the [...] button next to Edit schema and in the pop-up window define the schema by adding two columns: ID of Integer type and Name of String type.

Retrieving data from the table on Redshift

  1. Double-click tRedshiftInput to open its Basic settings view on the Component tab.

  2. Fill the Host, Port, Database, Schema, Username, and Password fields with their corresponding context variables.

  3. In the Table Name field, enter the name of the table to be read. In this example, it is person.

  4. Click the [...] button next to Edit schema and in the pop-up window define the schema by adding two columns: ID of Integer type and Name of String type.

  5. In the Query field, enter the following SQL statement based on which the data are retrieved.

    "SELECT * FROM" + context.redshift_schema + "person ORDER BY \"ID\""
  6. Double-click tLogRow to open its Basic settings view on the Component tab.

  7. In the Mode area, select Table (print values in cells of a table) for a better display of the result.

Unloading data from Redshift to file(s) on S3

  1. Double-click tRedshiftUnload to open its Basic settings view on the Component tab.

  2. Fill the Host, Port, Database, Schema, Username, and Password fields with their corresponding context variables.

    Fill the Access Key, Secret Key, and Bucket fields also with their corresponding context variables.

  3. In the Table Name field, enter the name of the table from which the data will be read. In this example, it is person.

  4. Click the [...] button next to Edit schema and in the pop-up window define the schema by adding two columns: ID of Integer type and Name of String type.

  5. In the Query field, enter the following SQL statement based on which the result will be unloaded.

    "SELECT * FROM person"
  6. In the Key prefix field, enter the name prefix for the unload files. In this example, it is person_unload_.

Retrieving files unloaded to Amazon S3

  1. Double-click tS3List to open its Basic settings view on the Component tab.

  2. Fill the Access Key and Secret Key fields with their corresponding context variables.

  3. From the Region list, select the AWS region where the unload files are created. In this example, it is US Standard.

  4. Clear the List all buckets objects check box, and click the [+] button under the table displayed to add one row.

    Fill in the Bucket name column with the name of the bucket in which the unload files are created. In this example, it is the context variable context.s3_bucket.

    Fill in the Key prefix column with the name prefix for the unload files. In this example, it is person_unload_.

  5. Double-click tS3Get to open its Basic settings view on the Component tab.

  6. Fill the Access Key field and Secret Key field with their corresponding context variables.

  7. From the Region list, select the AWS region where the unload files are created. In this example, it is US Standard.

  8. In the Bucket field, enter the name of the bucket in which the unload files are created. In this example, it is the context variable context.s3_bucket.

    In the Key field, enter the name of the unload files by pressing Ctrl + Space and from the list selecting the global variable ((String)globalMap.get("tS3List_1_CURRENT_KEY")).

  9. In the File field, enter the local path where the unload files are saved. In this example, it is "E:/Redshift/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY")).

Saving and executing the Job

  1. Press Ctrl + S to save the Job.

  2. Execute the Job by pressing F6 or clicking Run on the Run tab.

    The generated data is written into the local file redshift_bulk.txt, the file is uploaded to S3 with the new name person_load, and the data is then loaded from the file on S3 into the table person in Redshift and displayed on the console. After that, the data is unloaded from the table person in Redshift to two files on S3, person_unload_0000_part_00 and person_unload_0001_part_00 (one per slice of the Redshift cluster), and finally the unloaded files on S3 are listed and retrieved to the local folder.