tRedshiftOutputBulk properties - 6.1

Talend Components Reference Guide


The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process to load data to Amazon Redshift from a delimited/CSV file on Amazon S3. In the first step, a delimited/CSV file is generated. In the second step, this file is used in the INSERT statement that feeds Amazon Redshift. These two steps are fused together in the tRedshiftOutputBulkExec component. The advantage of using two separate steps is that the data can be transformed before it is loaded into Amazon Redshift.
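
For orientation, the following is a minimal Java sketch of what this two-step flow amounts to outside the Studio, assuming the AWS SDK for Java and a PostgreSQL-compatible JDBC driver on the classpath. The file path, credentials, bucket, key, connection URL, and table name are all placeholders, and the load statement shown is Redshift's standard COPY-from-S3 form; it illustrates the flow, not the exact statement the component generates.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3Client;

    import java.io.File;
    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RedshiftBulkLoadSketch {

        public static void main(String[] args) throws Exception {
            // Step 1: generate a delimited/CSV file locally (placeholder data).
            File dataFile = new File("/tmp/redshift_bulk.csv");
            try (FileWriter writer = new FileWriter(dataFile)) {
                writer.write("1;Alice\n2;Bob\n");
            }

            // Step 2a: upload the file to Amazon S3 (bucket and key are placeholders).
            AmazonS3Client s3 = new AmazonS3Client(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
            s3.putObject("my-bucket", "redshift_bulk.csv", dataFile);

            // Step 2b: have Amazon Redshift load the uploaded object; loading
            // from S3 uses the COPY statement (URL and table are placeholders).
            try (Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://example.redshift.amazonaws.com:5439/dev",
                        "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("COPY my_table FROM 's3://my-bucket/redshift_bulk.csv' "
                        + "CREDENTIALS 'aws_access_key_id=ACCESS_KEY;"
                        + "aws_secret_access_key=SECRET_KEY' DELIMITER ';'");
            }
        }
    }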

Component family

Databases/Amazon Redshift


Function

This component receives data from the preceding component, generates a single delimited/CSV file, and then uploads the file to Amazon S3.


Purpose

This component allows you to prepare a delimited/CSV file that can be used by tRedshiftBulkExec to feed Amazon Redshift.

Basic settings

Data file path at local

Specify the local path to the file to be generated.

Note that the file is generated on the same machine where the Studio is installed or where the Job using this component is deployed.


Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.



Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.



Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.



Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.


Append the local file

Select this check box to append data to the specified local file if it already exists, instead of overwriting it.


Compress the data file

Select this check box to compress the data file, and select a compression type from the list that appears.

This check box disappears when the Append the local file check box is selected.
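
As an illustration of what compressing the data file means in practice, here is a minimal sketch that gzips an already generated file with the standard Java library. The file paths are placeholders, and which compression types the list actually offers depends on your Studio version.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.zip.GZIPOutputStream;

    public class CompressDataFileSketch {

        public static void main(String[] args) throws IOException {
            // Copy the generated delimited file into a gzip stream.
            try (InputStream in = new FileInputStream("/tmp/redshift_bulk.csv");
                 OutputStream out = new GZIPOutputStream(
                         new FileOutputStream("/tmp/redshift_bulk.csv.gz"))) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }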

S3 Setting

Access Key

Specify the Access Key ID that uniquely identifies an AWS Account. For information on how to obtain your Access Key and Secret Key, see Getting Your AWS Access Keys.


Secret Key

Specify the Secret Access Key, which, together with the Access Key, constitutes your AWS security credentials.

To enter the secret key, click the [...] button next to the Secret Key field, and then, in the pop-up dialog box, enter the secret key between double quotes and click OK to save the settings.



Bucket

Type in the name of the Amazon S3 bucket, namely the top level folder, to which the file is uploaded.



Key

Type in an object key to assign to the file uploaded to Amazon S3.

Advanced settings

Field Separator

Enter the character used to separate fields.


Text enclosure

Enter the character used to enclose fields; each field is wrapped in a pair of this character.
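
To make these two settings concrete, here is a small sketch that builds one row by hand; the ';' separator and '"' enclosure are illustrative choices, not the component's defaults.

    public class DelimitedRowSketch {

        public static void main(String[] args) {
            String separator = ";";   // Field Separator (illustrative choice)
            String enclosure = "\"";  // Text enclosure (illustrative choice)
            // The second field contains the separator, which is why enclosing
            // the fields matters.
            String[] fields = {"1", "Smith; John"};
            StringBuilder row = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                if (i > 0) {
                    row.append(separator);
                }
                row.append(enclosure).append(fields[i]).append(enclosure);
            }
            System.out.println(row); // prints "1";"Smith; John"
        }
    }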


Delete local file after putting it to s3

Select this check box to delete the local file after it has been uploaded to Amazon S3. By default, this check box is selected.


Create directory if not exists

Select this check box to create the directory specified in the Data file path at local field if it does not exist. By default, this check box is selected.



Encoding

Select an encoding type for the data in the file to be generated.

S3 Setting

Config client

Select this check box to configure client parameters for Amazon S3. Click the [+] button below the table displayed to add as many rows as needed, each row for a client parameter, and set the following attributes for each parameter (see the sketch after this list):

  • Client Parameter: Click the cell and select a parameter from the drop-down list.

  • Value: Enter the value for the corresponding client parameter.
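
Which parameters appear in the drop-down list depends on your Studio version. As an illustration of what such parameter/value pairs typically map to, here is a sketch using the AWS SDK for Java's ClientConfiguration; that the component configures the client this way internally is an assumption, and the credentials are placeholders.

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.Protocol;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class S3ClientConfigSketch {

        public static void main(String[] args) {
            // Each setter below plays the role of one parameter/value row.
            ClientConfiguration config = new ClientConfiguration();
            config.setConnectionTimeout(30000); // milliseconds
            config.setSocketTimeout(30000);     // milliseconds
            config.setMaxErrorRetry(3);
            config.setProtocol(Protocol.HTTPS);

            // Placeholder credentials; the configured client is then used
            // for the upload.
            AmazonS3Client s3 = new AmazonS3Client(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"), config);
        }
    }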


tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
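
For example, in a tJava component placed after this component, the After variables can be read from the globalMap provided by the generated Job. The component name tRedshiftOutputBulk_1 below is an assumption; match it to the name in your own Job.

    // Code for a tJava component; globalMap is provided by the generated Job.
    Integer nbLine = (Integer) globalMap.get("tRedshiftOutputBulk_1_NB_LINE");
    String errorMessage = (String) globalMap.get("tRedshiftOutputBulk_1_ERROR_MESSAGE");
    System.out.println("Rows written to the data file: " + nbLine);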


Usage

This component is more commonly used with the tRedshiftBulkExec component to feed Amazon Redshift with a delimited/CSV file. Used together, they offer performance gains when feeding Amazon Redshift.


Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache Log4j documentation.