tRedshiftOutputBulkExec Standard properties - 7.1

Amazon Redshift

author
Talend Documentation Team
EnrichVersion
7.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Data Quality and Preparation > Third-party systems > Amazon services (Integration) > Amazon Redshift components
Design and Development > Third-party systems > Amazon services (Integration) > Amazon Redshift components
EnrichPlatform
Talend Studio

These properties are used to configure tRedshiftOutputBulkExec running in the Standard Job framework.

The Standard tRedshiftOutputBulkExec component belongs to the Cloud and the Databases families.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database connector. The properties related to database settings vary depending on your database type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click Apply.

Property Type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file in which the properties are stored. The database connection fields that follow are completed automatically using the data retrieved.

Use an existing connection

Select this check box and, in the Component List, select the relevant connection component to reuse the connection details you have already defined.

Host

Type in the IP address or hostname of the database server.

Port

Type in the listening port number of the database server.

Database

Type in the name of the database.

Schema

Type in the name of the schema.

Username and Password

Type in the database user authentication data.

To enter the password, click the [...] button next to the password field, enter the password between double quotes in the pop-up dialog box, and click OK to save the settings.

Additional JDBC Parameters

Specify additional JDBC properties for the connection you are creating. Properties are separated by an ampersand (&) and each property is a key-value pair. For example, ssl=true&sslfactory=com.amazon.redshift.ssl.NonValidatingFactory, which means the connection will be created using SSL.
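
As an illustration only, the following sketch shows how such a connection string might be assembled in Java, assuming the Amazon Redshift JDBC driver is on the classpath; the cluster endpoint, database, and credentials are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RedshiftConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical cluster endpoint, port, and database.
            String base = "jdbc:redshift://examplecluster.abc123.us-east-1"
                    + ".redshift.amazonaws.com:5439/dev";
            // Additional JDBC parameters follow a '?', separated by '&'.
            String url = base + "?ssl=true&sslfactory=com.amazon.redshift.ssl.NonValidatingFactory";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("SSL connection open: " + !conn.isClosed());
            }
        }
    }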

Table Name

Specify the name of the table to be written. Note that only one table can be written at a time.

Action on table

On the table defined, you can perform one of the following operations:

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table is created (it must not already exist).

  • Create table if not exists: The table is created if it does not exist.

  • Drop table if exists and create: The table is removed if it already exists and created again.

  • Clear table: The table content is deleted. You can roll back this operation.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

Data file path at local

Specify the local path to the file to be generated.

Note that the file is generated on the same machine where the Studio is installed or where the Job using this component is deployed.

Append the local file

Select this check box to append data to the specified local file if it already exists, instead of overwriting it.

Create directory if not exists

Select this check box to create the directory specified in the Data file path at local field if it does not exist. By default, this check box is selected.

Access Key

Specify the Access Key ID that uniquely identifies an AWS Account. For information about how to obtain your Access Key and Secret Access Key, visit Getting Your AWS Access Keys.

Secret Key

Specify the Secret Access Key, which, combined with the Access Key, constitutes your AWS security credentials.

To enter the secret key, click the [...] button next to the secret key field, enter the secret key between double quotes in the pop-up dialog box, and click OK to save the settings.

Inherit credentials from AWS role

Select this check box to obtain AWS security credentials from Amazon EC2 instance metadata. To use this option, the Amazon EC2 instance must be started and your Job must be running on Amazon EC2. For more information, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.
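
The two credential options above map to standard credential providers in the AWS SDK for Java (v1). A minimal sketch, with hypothetical key values:

    import com.amazonaws.auth.AWSCredentialsProvider;
    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.auth.InstanceProfileCredentialsProvider;

    public class CredentialsSketch {
        // Explicit Access Key / Secret Key pair (hypothetical values).
        static AWSCredentialsProvider fromKeys =
                new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("AKIAEXAMPLE", "secretExample"));

        // Credentials inherited from the IAM role of the EC2 instance,
        // read from the instance metadata service (works only on EC2).
        static AWSCredentialsProvider fromInstanceRole =
                InstanceProfileCredentialsProvider.getInstance();
    }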

S3 Assume Role

Select this check box and specify the values for the following parameters used to create a new assumed role session.
  • Role ARN: the Amazon Resource Name (ARN) of the role to assume.

  • Role session name: an identifier for the assumed role session.

  • Session duration (minutes): the duration (in minutes) for which the assumed role session remains active.

For more information on assuming roles, see AssumeRole.

STS Endpoint

Select this check box and, in the field displayed, specify the AWS Security Token Service endpoint from which session credentials are retrieved, for example, sts.amazonaws.com.

This check box is available only when the Assume role check box is selected.
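
For reference, this is roughly what an assumed role session looks like with the AWS SDK for Java (v1); the role ARN, session name, and endpoint are hypothetical, and note that the SDK call expects the duration in seconds rather than minutes:

    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
    import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
    import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
    import com.amazonaws.services.securitytoken.model.Credentials;

    public class AssumeRoleSketch {
        public static void main(String[] args) {
            AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                    // Corresponds to the STS Endpoint setting above.
                    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                            "sts.amazonaws.com", "us-east-1"))
                    .build();
            AssumeRoleRequest request = new AssumeRoleRequest()
                    .withRoleArn("arn:aws:iam::123456789012:role/example-role") // Role ARN
                    .withRoleSessionName("example-session")   // Role session name
                    .withDurationSeconds(15 * 60);            // 15 minutes, expressed in seconds
            Credentials session = sts.assumeRole(request).getCredentials();
            System.out.println("Session expires: " + session.getExpiration());
        }
    }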

Region

Specify the AWS region by selecting a region name from the list or entering a region name between double quotation marks (for example, "us-east-1") in the list. For more information about the AWS Region, see Regions and Endpoints.

Bucket

Type in the name of the Amazon S3 bucket, namely the top level folder, to which the file is uploaded.

The bucket and the Redshift database to be used must be in the same region on Amazon. Keeping them in the same region avoids the S3ServiceException errors known to Amazon. For further information about these errors, see S3ServiceException Errors.

Key

Type in an object key to assign to the file uploaded to Amazon S3.
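
To make the bucket/key pairing concrete, here is a minimal upload sketch with the AWS SDK for Java (v1); the bucket name, key, and file path are hypothetical:

    import java.io.File;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class S3UploadSketch {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion(Regions.US_EAST_1) // same region as the Redshift cluster
                    .build();
            // The bucket is the top-level container; the key names the object in it.
            s3.putObject("example-bucket", "staging/data.csv", new File("/tmp/data.csv"));
        }
    }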

Redshift Assume Role

Select this check box and specify the values for the following parameters used to create a new assumed role session.

  • IAM Role ARNs chains: a series of chained roles, which may belong to other accounts, that your cluster can assume to access resources.

    You can chain a maximum of 10 roles.

  • Role ARN: the Amazon Resource Name (ARN) of the role to assume.

For more information on IAM Role ARNs chains, see Authorizing Redshift service.

Advanced settings

Fields terminated by

Enter the character used to separate fields.

Enclosed by

Select the character used to enclose the fields (each field is wrapped in a pair of this character).

Compressed by

Select this check box and select a compression type from the list displayed to compress the data file.

This field disappears when the Append the local file check box is selected.

Encrypt

Select this check box to generate and upload the data file to Amazon S3 using client-side encryption. In the Encryption key field displayed, specify the encryption key used to encrypt the file. Note that only a base64 encoded AES 128-bit or AES 256-bit envelope key is supported. For more information, see Loading Encrypted Data Files from Amazon S3.

By default, this check box is cleared and the data file will be uploaded to Amazon S3 using server-side encryption.

For more information about the client-side and server-side encryption, see Protecting Data Using Encryption.
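
If you need to produce such an envelope key, the following sketch generates a base64-encoded AES 256-bit key with the standard Java crypto API:

    import java.util.Base64;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class EnvelopeKeySketch {
        public static void main(String[] args) throws Exception {
            KeyGenerator generator = KeyGenerator.getInstance("AES");
            generator.init(256); // use 128 for an AES 128-bit key
            SecretKey key = generator.generateKey();
            // The base64-encoded key is the value expected by the Encryption key field.
            System.out.println(Base64.getEncoder().encodeToString(key.getEncoded()));
        }
    }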

Encoding

Select an encoding type for the data in the file to be generated.

Delete local file after putting it to s3

Select this check box to delete the local file after it is uploaded to Amazon S3. By default, this check box is selected.

Date format

Select one of the following items from the list to specify the date format in the source data:

  • NONE: No date format is specified.

  • PATTERN: Select this item and specify the date format in the field displayed. The default date format is YYYY-MM-DD.

  • AUTO: Select this item if you want Amazon Redshift to automatically recognize and convert the date format.

Time format

Select one of the following items from the list to specify the time format in the source data:

  • NONE: No time format is specified.

  • PATTERN: Select this item and specify the time format in the field displayed. The default time format is YYYY-MM-DD HH:MI:SS.

  • AUTO: Select this item if you want Amazon Redshift to automatically recognize and convert the time format.

  • EPOCHSECS: Select this item if the source data is represented as epoch time, the number of seconds since Jan 1, 1970 00:00:00 UTC.

  • EPOCHMILLISECS: Select this item if the source data is represented as epoch time, the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

Settings

Click the [+] button below the table to specify more parameters for loading the data.

  • Parameter: Click the cell and select a parameter from the drop-down list.

  • Value: Set the value for the corresponding parameter. Note that you cannot set the value for a parameter (such as IGNOREBLANKLINES) that does not need a value.

For more information about the parameters, see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.
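
These parameters end up as options of the Redshift COPY statement. Purely as an illustration of how they combine, with hypothetical table, bucket, and credential placeholders, a COPY issued over JDBC might look like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CopySketch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:redshift://examplecluster.abc123.us-east-1"
                    + ".redshift.amazonaws.com:5439/dev";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement()) {
                // DELIMITER and DATEFORMAT take values; IGNOREBLANKLINES does not.
                stmt.execute("COPY public.example_table"
                        + " FROM 's3://example-bucket/staging/data.csv'"
                        + " CREDENTIALS 'aws_access_key_id=AKIAEXAMPLE;"
                        + "aws_secret_access_key=secretExample'"
                        + " DELIMITER ';' DATEFORMAT 'YYYY-MM-DD' IGNOREBLANKLINES");
            }
        }
    }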

Config client

Select this check box to configure client parameters for Amazon S3. Click the [+] button below the table displayed to add as many rows as needed, each row for a client parameter, and set the following attributes for each parameter:

  • Client Parameter: Click the cell and select a parameter from the drop-down list.

  • Value: Enter the value for the corresponding client parameter.

JDBC url
Select a way to access an Amazon Redshift database from the JDBC url drop-down list.
  • Standard: Use the standard way to access the Redshift database.
  • SSO: Use the IAM Single Sign-On (SSO) authentication method to access the Redshift database. Ensure that the IAM role added to your Redshift cluster has appropriate access rights and permissions to this cluster. You can ask the administrator of your AWS services for more details.

    This property is available only when the Use an existing connection check box is cleared in the Basic settings view. When SSO is enabled, the Username and Password properties in Basic settings are hidden, and you must specify the access parameters in Additional JDBC Parameters.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
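
In generated Job code, After variables are read from the globalMap. A minimal sketch for a tJava component, assuming the component instance is named tRedshiftOutputBulkExec_1:

    // Inside a tJava component placed after tRedshiftOutputBulkExec_1.
    String error = (String) globalMap.get("tRedshiftOutputBulkExec_1_ERROR_MESSAGE");
    if (error != null) {
        System.err.println("Bulk load failed: " + error);
    }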

Usage

Usage rule

This component is mainly used when no particular transformation is required on the data to be loaded to Amazon Redshift.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.