tEXABulkExec Properties - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Component family

Databases/EXASolution

Basic settings

Use an existing connection

Select this check box and, from the list that appears, select the relevant connection component to reuse the connection details you have already defined.

Note

When a Job contains a parent Job and a child Job, and you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read that registered database connection.

For an example about how to share a database connection across Job levels, see Talend Studio User Guide.

Property Type

Either Built-In or Repository.

  • Built-In: No property data stored centrally.

  • Repository: Select the repository file in which the properties are stored. The database connection fields that follow are completed automatically using the data retrieved.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

 

Host

Enter the host or the host list of the EXASolution database servers. EXASolution can run in a cluster environment. The valid value can be a single IP address (e.g. 172.16.173.128), an IP range (e.g. 172.16.173.128..130, which represents the three servers 172.16.173.128, 172.16.173.129, and 172.16.173.130), or a comma-separated host list (e.g. server1,server2,server3) of the EXASolution database cluster.
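The IP range notation above is shorthand for a set of consecutive addresses. As an illustration (this helper is not part of the component), expanding the range yields the individual hosts:

```python
def expand_ip_range(spec):
    """Expand EXASolution host-range notation like '172.16.173.128..130'
    into the individual addresses it denotes (illustration only)."""
    if ".." not in spec:
        return [spec]  # a plain host or IP address stays as-is
    base, last = spec.split("..")
    prefix, first = base.rsplit(".", 1)
    return [f"{prefix}.{i}" for i in range(int(first), int(last) + 1)]

print(expand_ip_range("172.16.173.128..130"))
# ['172.16.173.128', '172.16.173.129', '172.16.173.130']
```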

 

Port

Enter the listening port number of the EXASolution database cluster.

 

Schema

Enter the name of the schema you want to use.

 

User and Password

Enter the user authentication data to access the EXASolution database.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Table

Enter the name of the table to be written.

Note

Typically, table names are stored in upper case. If you need a mixed-case identifier, you must enclose the name in double quotes, for example, "\"TEST_data_LOAD\"".

 

Action on table

On the table defined, you can perform one of the following operations before running the import:

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table is created. It must not already exist.

  • Create table if not exists: The table is created if it does not exist.

  • Truncate table: The table content is deleted. This operation cannot be rolled back.

 

Local Filename

Specify the path to the local file to be imported.

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

  • Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

Note

The columns in the schema must be in the same order as in the CSV file. You do not need to provide all the columns of the target table unless the use case or the table definition requires it.

 

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Advanced settings

Additional JDBC Parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons, and each property is a key-value pair, for example, encryption=1;clientname=Talend.
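The expected string is simply key-value pairs joined by semicolons. A minimal sketch of building such a string (the helper function is illustrative, not part of the component):

```python
def build_jdbc_params(props):
    """Join key-value pairs into the semicolon-separated string the
    Additional JDBC Parameters field expects."""
    return ";".join(f"{k}={v}" for k, v in props.items())

# The property names below come from the example in the documentation.
params = build_jdbc_params({"encryption": "1", "clientname": "Talend"})
print(params)
# encryption=1;clientname=Talend
```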

This field is not available if the Use an existing connection check box is selected.

 

Column Formats

Specify the format for Date and numeric columns if the default format cannot be applied.

  • Column: The cells in this column are automatically filled with the defined schema column names.

  • Has Thousand Delimiters: Select this check box if the values of the corresponding numeric column (numeric columns only) in the file contain thousands separators. The required format is derived from the length and precision of the schema column. For more information about the elements of numeric format models, see the EXASolution User Manual.

  • Alternative Format: Specify the required format as a String value if a special format is expected. For more information about format models, see the EXASolution User Manual.

 

Column Separator

Enter the separator for the columns of a row in the local file.

 

Column Delimiter

Enter the delimiter that encapsulates the field content in the local file.

 

Row Separator

Enter the character used to separate the rows in the local file.

 

Null representation

Enter the string that represents a NULL value in the local file. If not specified, NULL values are represented as the empty string.
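Taken together, the Column Separator, Column Delimiter, and Null representation settings describe how one row looks in the local file. A sketch, assuming a ; separator, a double-quote delimiter, and \N as the NULL representation (all hypothetical values, not component defaults):

```python
def format_row(values, sep=";", delim='"', null_rep="\\N"):
    """Render one row of the local file the way the Column Separator,
    Column Delimiter, and Null representation settings describe it."""
    out = []
    for v in values:
        if v is None:
            out.append(null_rep)                 # NULLs use the configured representation
        else:
            out.append(delim + str(v) + delim)   # field content wrapped in the delimiter
    return sep.join(out)

print(format_row(["42", None, "EXASolution"]))
# "42";\N;"EXASolution"
```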

 

Skip rows

Enter the number of rows (for example, header or any other prefix rows) to be omitted.

 

Encoding

Enter the character set used in the local file. By default, it is UTF8.

 

Trim column values

Specify whether spaces are removed at the edges of the CSV column values.

  • No trim: no spaces are trimmed.

  • Trim: spaces from both left and right sides are trimmed.

  • Trim only left: spaces from only the left side are trimmed.

  • Trim only right: spaces from only the right side are trimmed.
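The four trim options can be sketched as follows (the helper function is illustrative, not part of the component):

```python
def trim_value(value, mode="No trim"):
    """Apply one of the Trim column values options to a single CSV field."""
    if mode == "No trim":
        return value                 # value kept as-is
    if mode == "Trim":
        return value.strip()         # spaces removed on both sides
    if mode == "Trim only left":
        return value.lstrip()        # leading spaces removed
    if mode == "Trim only right":
        return value.rstrip()        # trailing spaces removed
    raise ValueError(f"unknown trim mode: {mode}")

# returns "data  " (only the leading spaces are removed)
trim_value("  data  ", "Trim only left")
```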

 

Default Date Format

Specify the format for datetime values. By default, it is YYYY-MM-DD.

 

Default Timestamp Format

Specify the timestamp format used. By default, it is YYYY-MM-DD HH24:MI:SS.FF3.

 

Thousands Separator

Specify the character used to separate groups of thousands in a numeric text value. In the numeric format, this character is applied to the placeholder G. If the text values contain this character, you must also configure it in the Column Formats table.

Note that this setting affects the connection property NLS_NUMERIC_CHARACTERS that defines the decimal and group characters used for representing numbers.

 

Decimal Separator

Specify the character used to separate the integer part of a number from the fractional part. In the numeric format, this character is applied to the placeholder D.

Note that this setting affects the connection property NLS_NUMERIC_CHARACTERS that defines the decimal and group characters used for representing numbers.
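As an illustration of how the two separator settings interact, the following sketch converts a localized numeric text to a standard number, assuming . as the thousands separator and , as the decimal separator (hypothetical values; in the component these characters map to the G and D placeholders of the numeric format):

```python
def to_standard_number(text, thousands=".", decimal=","):
    """Convert a localized numeric text such as '1.234.567,89' to a float,
    given the configured thousands and decimal separator characters."""
    # Drop the grouping character, then normalize the decimal separator.
    return float(text.replace(thousands, "").replace(decimal, "."))

print(to_standard_number("1.234.567,89"))
# 1234567.89
```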

 

Minimal number errors to reject the transfer

Specify the maximum number of invalid rows allowed during the data loading process. For example, a value of 2 means the loading process stops when the third error occurs.

 

Log Error Destination

Specify the location where error messages will be stored.

  • No Logging: error messages will not be saved.

  • Local Log File: error messages will be stored in a specified local file.

    • Local Error Log File: specify the path to the local file that stores error messages.

    • Add current timestamp to log file name (before extension): select this check box to add the current timestamp before the extension of the file name for identification reasons in case you use the same file multiple times.

  • Logging Table: error messages will be stored in a specified table. The table will be created if it does not exist.

    • Error Log Table: enter the name of the table that stores error messages.

    • Use current timestamp to build log table: select this check box to use the current timestamp to build the log table for identification reasons in case you use the same table multiple times.

 

Transfer files secure

Select this check box to transfer the file over HTTPS instead of HTTP.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Scenario 3: Reading data from MySQL databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.

Global Variables

NB_LINE_INSERTED: the number of rows inserted. This is an After variable and it returns an integer.

NB_LINE_DELETED: the number of rows deleted. This is an After variable and it returns an integer.

FILENAME: the name of the file processed. This is an After variable and it returns a string.

ERROR_LOG_FILE: the path to the local log file. This is an After variable and it returns a string.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is usually used as a standalone component.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.