tExasolBulkExec Standard properties - Cloud - 8.0

Exasol

Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Last publication date
2024-02-20

These properties are used to configure tExasolBulkExec running in the Standard Job framework.

The Standard tExasolBulkExec component belongs to the Databases family.

The component in this framework is available in all Talend products.

Basic settings

Use an existing connection

Select this check box and, from the list that appears, select the relevant connection component to reuse the connection details you have already defined.

Note: When a Job contains a parent Job and a child Job, do the following to share an existing connection between the two levels (for example, to make the connection created by the parent Job available to the child Job).
  1. At the parent level, register the database connection to be shared in the Basic settings view of the connection component that creates that database connection.
  2. At the child level, use a dedicated connection component to read the registered database connection.

For an example about how to share a database connection across Job levels, see Sharing a database connection.

Property Type

Either Built-In or Repository.

  • Built-In: No property data stored centrally.

  • Repository: Select the repository file in which the properties are stored. The database connection fields that follow are completed automatically using the data retrieved.

Host

Enter the host or the list of hosts of the Exasol database servers. Exasol can run in a cluster environment. The value can be a single IP address (for example, 172.16.173.128), an IP range list (for example, 172.16.173.128..130, which represents the three servers 172.16.173.128, 172.16.173.129, and 172.16.173.130), or a comma-separated host list (for example, server1,server2,server3) of the Exasol database cluster.
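
The host value maps onto the connection URL of the Exasol JDBC driver, which accepts the same range and list notations. The following minimal sketch is an illustration only, assuming the standard Exasol driver class com.exasol.jdbc.EXADriver and the Exasol default port 8563; the addresses and credentials are placeholders, not values taken from this documentation.

    // Sketch: connecting to an Exasol cluster through an IP range list.
    // The range 172.16.173.128..130 covers the three nodes .128, .129, .130.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ExasolConnectSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("com.exasol.jdbc.EXADriver"); // Exasol JDBC driver
            String url = "jdbc:exa:172.16.173.128..130:8563"; // host range + port
            // Placeholder credentials; replace with your own.
            try (Connection conn = DriverManager.getConnection(url, "sys", "secret")) {
                System.out.println("Connected to " + conn.getMetaData().getURL());
            }
        }
    }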

Port

Enter the listening port number of the Exasol database cluster.

Schema

Enter the name of the schema you want to use.

User and Password

Enter the user authentication data to access the Exasol database.

To enter the password, click the [...] button next to the password field, enter the password in double quotes in the pop-up dialog box, and click OK to save the settings.

Table

Enter the name of the table to be written.

Note: Table names are typically stored in upper case. If you need a mixed-case identifier, you have to enter the name in double quotes. For example, "\"TEST_data_LOAD\"".

Action on table

On the table defined, you can perform one of the following operations before running the import:

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table is created; it must not already exist.

  • Create table if not exists: The table is created if it does not exist.

  • Truncate table: The table content is deleted. This operation cannot be rolled back.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Note: The columns in the schema must be in the same order as in the CSV file. You do not need to fill all columns of the defined table unless the use case or the table definition requires it.

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

Advanced settings

Additional JDBC Parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons, and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing connection check box is selected.
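
As a sketch of how such a parameter string combines with the host and port, the following illustrates the Exasol URL convention of semicolon-separated key-value pairs appended after the port. The host range and parameter values reuse the examples above and are not prescriptive.

    // Sketch: appending additional JDBC parameters to an Exasol URL.
    public class ExasolUrlSketch {
        static String buildUrl(String host, int port, String extraParams) {
            String url = "jdbc:exa:" + host + ":" + port;
            if (extraParams != null && !extraParams.isEmpty()) {
                url += ";" + extraParams; // key=value pairs, semicolon-separated
            }
            return url;
        }

        public static void main(String[] args) {
            // Prints: jdbc:exa:172.16.173.128..130:8563;encryption=1;clientname=Talend
            System.out.println(buildUrl("172.16.173.128..130", 8563,
                    "encryption=1;clientname=Talend"));
        }
    }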

Column Formats

Specify the format for Date and numeric columns if the default cannot be applied.

  • Column: The cells in this column are automatically filled with the defined schema column names.

  • Has Thousand Delimiters: Select this check box if the values of the corresponding column (numeric columns only) contain thousands separators in the file.

  • Alternative Format: Specify the format as a String value if a special format is expected. If you leave this field empty, the format is derived from the schema column length and precision. For more information about format models, see the Exasol User Manual.

Source table columns

If the source is a database, configure the mapping between the source columns and the target columns in this table.

Configuring the mapping is optional. If you leave this table empty, the source table is assumed to have the same structure as the target table.

  • Column: The schema column in the target table.

  • Source column name: The name of the column in the source table.

Column Separator

Enter the separator for the columns of a row in the local file.

Column Delimiter

Enter the delimiter that encapsulates the field content in the local file.

Row Separator

Enter the character used to separate the rows in the local file.

Null representation

Enter the string that represents a NULL value in the local file. If not specified, NULL values are represented as the empty string.

Skip rows

Enter the number of rows (for example, header or any other prefix rows) to be omitted.

Encoding

Enter the character set used in the local file. By default, it is UTF8.

Trim column values

Specify whether spaces are removed at the borders of CSV column values.

  • No trim: no spaces are trimmed.

  • Trim: spaces from both left and right sides are trimmed.

  • Trim only left: spaces from only the left side are trimmed.

  • Trim only right: spaces from only the right side are trimmed.

Default Date Format

Specify the format for datetime values. By default, it is YYYY-MM-DD.

Default Timestamp Format

Specify the timestamp format used. By default, it is YYYY-MM-DD HH24:MI:SS.FF3.

Thousands Separator

Specify the character used to separate groups of thousands in a numeric text value. In the numeric format, the character is applied to the placeholder G (for example, in the format model 999G999D99). If the text values contain this character, you must also configure it in the Column Formats table.

Note that this setting affects the connection property NLS_NUMERIC_CHARACTERS that defines the decimal and group characters used for representing numbers.

Decimal Separator

Specify the character used to separate the integer part of a number from the fractional part. In the numeric format, the character is applied to the placeholder D.

Note that this setting affects the connection property NLS_NUMERIC_CHARACTERS that defines the decimal and group characters used for representing numbers.

Minimal number errors to reject the transfer

Specify the maximum number of invalid rows allowed during the data loading process. For example, the value 2 means the loading process stops when the third error occurs.

Log Error Destination

Specify the location where error messages will be stored.

  • No Logging: error messages will not be saved.

  • Local Log File: error messages will be stored in a specified local file.

    • Local Error Log File: specify the path to the local file that stores error messages.

    • Add current timestamp to log file name (before extension): select this check box to add the current timestamp before the extension of the file name for identification reasons in case you use the same file multiple times.

  • Logging Table: error messages will be stored in a specified table. The table will be created if it does not exist.

    • Error Log Table: enter the name of the table that stores error messages.

    • Use current timestamp to build log table: select this check box to use the current timestamp to build the log table for identification reasons in case you use the same table multiple times.

Transfer files secure

Select this check box to transfer the file over HTTPS instead of HTTP.

Test mode (no statements are executed)

Select this check box to run the component in test mode, where no statements are executed.

Use precision and length from schema

Select this check box to check column values of numeric types (that is, Double, Float, BigDecimal, Integer, Long, and Short) against the Length setting (the number of integer digits) and the Precision setting (the number of decimal digits) in the schema. Only values whose number of integer digits does not exceed the Length setting and whose number of decimal digits does not exceed the Precision setting are loaded.

For example, with Length set to 4 and Precision set to 3, the values 8888.8888 and 88888.888 will be dropped; the values 8888.88 and 888.888 will be loaded.
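
The rule can be restated in code. The following sketch mimics the check using java.math.BigDecimal; it illustrates the rule described above and is not the code Talend Studio actually generates.

    // Sketch: length = allowed integer digits, precision = allowed decimal digits.
    import java.math.BigDecimal;

    public class PrecisionCheckSketch {
        static boolean fits(BigDecimal value, int length, int precision) {
            int decimalDigits = Math.max(value.scale(), 0);
            int integerDigits = value.precision() - value.scale();
            return integerDigits <= length && decimalDigits <= precision;
        }

        public static void main(String[] args) {
            int length = 4, precision = 3;
            System.out.println(fits(new BigDecimal("8888.8888"), length, precision)); // false: 4 decimal digits
            System.out.println(fits(new BigDecimal("88888.888"), length, precision)); // false: 5 integer digits
            System.out.println(fits(new BigDecimal("8888.88"), length, precision));   // true
            System.out.println(fits(new BigDecimal("888.888"), length, precision));   // true
        }
    }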

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

NB_LINE_INSERTED: the number of rows inserted. This is an After variable and it returns an integer.

NB_LINE_DELETED: the number of rows deleted. This is an After variable and it returns an integer.

FILENAME: the name of the file processed. This is an After variable and it returns a string.

ERROR_LOG_FILE: the path to the local log file. This is an After variable and it returns a string.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, on components that have this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For more information about variables, see Using contexts and variables.
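
As a usage sketch, After variables can also be read from the globalMap inside Talend-generated Java code, for example in a tJava component placed after this component. The instance name tExasolBulkExec_1 below is an assumption; use the exact component name shown in your Job.

    // Sketch for a tJava component: reading After variables from globalMap.
    // tExasolBulkExec_1 is a hypothetical instance name.
    Integer inserted = (Integer) globalMap.get("tExasolBulkExec_1_NB_LINE_INSERTED");
    String errorLog = (String) globalMap.get("tExasolBulkExec_1_ERROR_LOG_FILE");
    System.out.println("Rows inserted: " + inserted);
    System.out.println("Error log file: " + errorLog);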

Usage

Usage rule

This component is usually used as a standalone component.

Dynamic settings

Click the [+] button to add a row to the table, and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables that have the same data structure but reside in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independently of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Dynamic schema and Creating a context group and define context variables in it.