tTeradataTPump Standard properties - 7.0

Teradata

author
Talend Documentation Team
EnrichVersion
7.0
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Database components > Teradata components
Data Quality and Preparation > Third-party systems > Database components > Teradata components
Design and Development > Third-party systems > Database components > Teradata components
EnrichPlatform
Talend Studio

These properties are used to configure tTeradataTPump running in the Standard Job framework.

The Standard tTeradataTPump component belongs to the Databases family.

The component in this framework is available in all Talend products.

Basic settings

Property type

Either Built-in or Repository.

 

Built-in: No property data stored centrally.

 

Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved.

Execution platform

Select the Operating System type you use.

Host

Host name or IP address of the database server.

Database name

Database name.

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

Table

Name of the table to be written. Note that only one table can be written at a time.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Script generated folder

Browse your directory and select the destination of the file that will be created.

Action to data

On the data of the table defined, you can perform the following operations:

Insert: Add new entries to the table. If duplicates are found, the Job stops.

Update: Make changes to existing entries.

Insert or update: Insert a new record. If a record with the given reference already exists, it is updated instead.

Delete: Remove entries corresponding to the input flow.

Warning:

It is necessary to specify at least one column as a primary key on which the Update and Delete operations are based. You can do that by clicking Edit Schema and selecting the check box(es) next to the column(s) you want to set as primary key(s).

Where condition in case Delete

Type in a condition; rows that satisfy this condition are deleted.

This field appears only when Delete is selected from the Action to data drop-down list.
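For illustration only, assuming a hypothetical input flow mapped to a table with a STATUS column, the condition could be:

```
STATUS = 'CANCELLED'
```

Rows for which this condition is verified are removed from the table.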

Load file

Browse your directory and select the file from which you want to load data.

Field separator

Character, string or regular expression to separate fields.

Error file

Browse your directory and select the destination of the file where the error messages will be recorded.

Advanced settings

Define Log table

Select this check box to define a log table to be used in place of the default one, that is, the database table you defined in Basic settings. The required syntax for the log table name is databasename.logtablename.

BEGIN LOAD

This field allows you to define your BEGIN LOAD command to initiate or restart a TPump task. You can specify the number of sessions to use, the error limit and any other parameters needed to execute the task. The default value is:

SESSIONS 8 PACK 600 ARRAYSUPPORT ON CHECKPOINT 60 TENACITY 2 ERRLIMIT 1000.

For more information, see Teradata Parallel Data Pump Reference documentation.
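For example, to use fewer sessions and a stricter error limit than the default shown above, the BEGIN LOAD field could contain a parameter string such as the following (the values are illustrative; see the Teradata TPump reference for the full syntax and valid ranges):

```
SESSIONS 4 PACK 300 ARRAYSUPPORT ON CHECKPOINT 30 TENACITY 4 ERRLIMIT 100
```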

Return tpump error

Select this check box to specify the exit code number that determines when an error message is displayed in the console.

Define character set

Specify the character encoding to be used for your system.

tStat Catcher Statistics

Select this check box to collect log data at the component level.

Global Variables

EXIT_VALUE: the process exit code. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

Usage rule

This component offers the flexibility of database queries and covers all possible SQL queries.