tJDBCInput - 6.3

Talend Components Reference Guide

EnrichVersion
6.3
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tJDBCInput reads any database using a JDBC API connection and extracts fields based on a query.

Purpose

tJDBCInput executes a database query whose field order must strictly correspond to the schema definition. It then passes the field list on to the next component via a Main row link.

Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:

  • Standard: see tJDBCInput properties.

    The component in this framework is generally available.

  • MapReduce: see tJDBCInput in Talend Map/Reduce Jobs.

    The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.

  • Spark Batch: see tJDBCInput in Spark Batch Jobs.

This component also allows you to connect and read data from an RDS MariaDB, an RDS PostgreSQL or an RDS SQLServer database.

    The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.

tJDBCInput properties

Component family

Databases/JDBC

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

Note

When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read that registered database connection.

For an example about how to share a database connection across Job levels, see Talend Studio User Guide.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

JDBC URL

Type in the database location path. For example, if a MySQL database called Talend is hosted by a machine located at an IP address XX.XX.XX.XX and the port is 3306, then the URL should be jdbc:mysql://XX.XX.XX.XX:3306/Talend.

 

Driver JAR

Click the [+] button under the table to add as many lines as you need, one per driver JAR to be loaded. Then on each line, click the [...] button to open the Select Module wizard and select the driver JAR to use.

 

Class Name

Type in the class name of the driver to be used. For example, for the mysql-connector-java-5.1.2.jar driver, the name to be entered is org.gjt.mm.mysql.Driver.

 

Username and Password

Enter the authentication information to the database you need to connect to.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
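
Under the hood, these settings map directly onto the standard JDBC API. The following minimal sketch shows the equivalent plain Java calls, reusing the example URL and driver class from above; the host and credentials are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcConnectSketch {
        public static void main(String[] args) throws Exception {
            // Class Name field: load the JDBC driver class.
            Class.forName("org.gjt.mm.mysql.Driver");

            // JDBC URL, Username and Password fields (placeholder values).
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://XX.XX.XX.XX:3306/Talend", "talend", "talend");

            System.out.println("Connected: " + !conn.isClosed());
            conn.close();
        }
    }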

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Talend Studio User Guide.

This dynamic schema feature is designed for retrieving unknown columns of a table and is recommended for that purpose only; it is not recommended for creating tables.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

  

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

Table Name

Type in the name of the table from which you need to read data.

 

Query type and Query

Enter your database query, paying particular attention to the sequence of the fields so that it matches the schema definition.

If using the dynamic schema feature, the SELECT query must include the * wildcard to retrieve all of the columns from the selected table.
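
As an illustration, assume a hypothetical employee table and a component schema defined, in order, as id, name and city. In plain JDBC, the query and the order in which the fields are consumed would look like this (connection details as in the earlier sketch):

    import java.sql.*;

    public class QueryOrderSketch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://XX.XX.XX.XX:3306/Talend", "talend", "talend");

            // The SELECT list follows the schema order: id, name, city.
            String query = "SELECT id, name, city FROM employee";

            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(query)) {
                while (rs.next()) {
                    // Fields are read in the same order as the schema columns.
                    System.out.println(rs.getInt(1) + " | "
                            + rs.getString(2) + " | " + rs.getString(3));
                }
            }
            conn.close();
        }
    }

With a dynamic schema, the query would instead be SELECT * FROM employee so that all columns are retrieved.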

 

Specify a data source alias

Select this check box and specify the alias of a data source created on the Talend Runtime side to use the shared connection pool defined in the data source configuration. This option works only when you deploy and run your Job in Talend Runtime. For a related use case, see Scenario 4: Retrieving data from a MySQL database using the data source on Talend Runtime side to set up the database connection.

Warning

If you use the component's own DB configuration, your data source connection will be closed at the end of the component. To prevent this from happening, use a shared DB connection with the data source alias specified.

This check box is not available when the Use an existing connection check box is selected.

Advanced settings

Use cursor

Select this check box to specify the number of rows you want to work with at any given time. This option optimises performance.
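
In plain JDBC terms, this option corresponds to setting a fetch size on the statement so that rows are streamed in batches rather than loaded all at once. A minimal fragment, assuming conn is an open java.sql.Connection and an example batch of 1000 rows:

    // Assuming conn is an open java.sql.Connection.
    Statement stmt = conn.createStatement();
    stmt.setFetchSize(1000); // work with 1000 rows at a time
    ResultSet rs = stmt.executeQuery("SELECT id, name FROM employee");
    while (rs.next()) {
        // process each row
    }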

 

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from all the String/Char columns.

 

Trim column

This table is filled automatically with the schema being used. Select the check box(es) corresponding to the column(s) to be trimmed.

 

Enable Mapping File for Dynamic

Select this check box to use the specified metadata mapping file when reading data from a dynamic type column. This check box is cleared by default.

For more information about metadata mapping files, see the section on type conversion of Talend Studio User Guide.

 

Mapping File

Specify the metadata mapping file to use by selecting a type of database from the list.

This list field appears only when the Enable Mapping File for Dynamic check box is selected.

 

tStatCatcher Statistics

Select this check box to collect log data at the component level.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Scenario 3: Reading data from MySQL databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

QUERY: the query statement being processed. This is a Flow variable and it returns a string.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
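
For example, in a tJava component triggered after a component instance labeled tJDBCInput_1 (a hypothetical name), the variables can be read from globalMap as follows:

    // In a tJava component, after tJDBCInput_1 has finished:
    Integer nbLine = (Integer) globalMap.get("tJDBCInput_1_NB_LINE");
    String query = (String) globalMap.get("tJDBCInput_1_QUERY");
    System.out.println("Rows read: " + nbLine);
    System.out.println("Query executed: " + query);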

For further information about variables, see Talend Studio User Guide.

Usage

This component covers all possible SQL queries for any database using a JDBC connection.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

tJDBCInput in Talend Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of the Talend solutions with Big Data and is not applicable to Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tJDBCInput, as well as the other Map/Reduce components preceding it, generates native Map/Reduce code. This section presents the specific properties of tJDBCInput when it is used in that situation. For further information about a Talend Map/Reduce Job, see Talend Big Data Getting Started Guide.

Component family

MapReduce/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

JDBC URL

Type in the database location path. For example, if a MySQL database called Talend is hosted by a machine located at an IP address XX.XX.XX.XX and the port is 3306, then the URL should be jdbc:mysql://XX.XX.XX.XX:3306/Talend.

 

Driver JAR

Click the [+] button under the table to add as many lines as you need, one per driver JAR to be loaded. Then on each line, click the [...] button to open the Select Module wizard and select the driver JAR to use.

 

Class Name

Type in the class name of the driver to be used. For example, for the mysql-connector-java-5.1.2.jar driver, the name to be entered is org.gjt.mm.mysql.Driver.

 

Username and Password

Enter the authentication information to the database you need to connect to.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

  

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

Table Name

Type in the name of the table from which you need to read data.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 

Query type and Query

Enter your database query, paying particular attention to the sequence of the fields so that it matches the schema definition.

If using the dynamic schema feature, the SELECT query must include the * wildcard to retrieve all of the columns from the selected table.

Usage in Map/Reduce Jobs

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and non-Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Limitation

We recommend using the following databases with the Map/Reduce version of this component: DB2, Informix, MSSQL, MySQL, Netezza, Oracle, Postgres, Teradata and Vertica.

It may work with other databases as well, but these may not necessarily have been tested.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tJDBCInput in Spark Batch Jobs

Component family

Databases/DB JDBC

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

 

JDBC URL

Type in the database location path. For example, if a MySQL database called Talend is hosted by a machine located at an IP address XX.XX.XX.XX and the port is 3306, then the URL should be jdbc:mysql://XX.XX.XX.XX:3306/Talend.

If you are using Spark V1.3, this URL should contain the authentication information, such as:

jdbc:mysql://XX.XX.XX.XX:3306/Talend?user=ychen&password=talend

 

Driver JAR

Click the [+] button under the table to add as many lines as you need, one per driver JAR to be loaded. Then on each line, click the [...] button to open the Select Module wizard and select the driver JAR to use.

 

Class Name

Type in the class name of the driver to be used. For example, for the mysql-connector-java-5.1.2.jar driver, the name to be entered is org.gjt.mm.mysql.Driver.

 

Username and Password

Enter the authentication information to the database you need to connect to.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

Available only for Spark V1.4 and onwards.

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

  

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

Table Name

Type in the name of the table from which you need to read data.

 

Query type and Query

Enter your database query, paying particular attention to the sequence of the fields so that it matches the schema definition.

If you are using Spark V2.0 or later, Spark SQL no longer recognizes the prefix of a database table. This means that you must enter only the table name, without any prefix indicating, for example, the schema this table belongs to.

For example, if you need to perform a query in a table system.mytable, where the system prefix indicates the schema that the mytable table belongs to, you must enter mytable only in the query.

 

Guess Query

Click the Guess Query button to generate the query which corresponds to your table schema in the Query field.

 

Guess schema

Click the Guess schema button to retrieve the table schema.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing connection check box is selected.

 

Spark SQL JDBC parameters

Add the JDBC properties supported by Spark SQL to this table. For a list of the user-configurable properties, see JDBC to other databases.

This component automatically sets the url, dbtable and driver properties by using the configuration from the Basic settings tab.
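
As a rough illustration of what the component configures, the following sketch reads a table through the Spark (V2.x) DataFrame JDBC source in Java. The url, dbtable and driver options mirror the Basic settings, while fetchsize stands in for an additional Spark SQL JDBC parameter; all values are hypothetical:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SparkJdbcReadSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("jdbc-read-sketch")
                    .master("local[*]") // local run, for illustration only
                    .getOrCreate();

            Dataset<Row> df = spark.read()
                    .format("jdbc")
                    .option("url", "jdbc:mysql://XX.XX.XX.XX:3306/Talend")
                    .option("dbtable", "mytable")
                    .option("driver", "org.gjt.mm.mysql.Driver")
                    .option("user", "talend")     // hypothetical credentials
                    .option("password", "talend")
                    .option("fetchsize", "1000")  // example extra parameter
                    .load();

            df.show();
            spark.stop();
        }
    }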

 

Use cursor

Select this check box to specify the number of rows you want to work with at any given time. This option optimises performance.

 

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from all the String/Char columns.

 

Trim column

This table is filled automatically with the schema being used. Select the check box(es) corresponding to the column(s) to be trimmed.

 

Enable partitioning

Select this check box to read data in partitions.

Define, within double quotation marks, the following parameters to configure the partitioning:

  • Partition column: the numeric column used as partition key.

  • Lower bound of the partition stride and Upper bound of the partition stride: enter the lower bound and the upper bound to determine the partition stride. These bounds do not filter the table rows. All rows in the table are partitioned and returned.

  • Number of partitions: the number of partitions into which the table rows are split. Each Spark worker handles only one of the partitions at a time.

The average size of a partition is the difference between the upper bound and the lower bound divided by the number of partitions, that is, (upperBound - lowerBound)/partitionNumber. In addition, the first partition also takes the rows below the lower bound and the last partition takes the rows above the upper bound.

For example, to partition 1000 rows into 4 partitions, if you enter 0 for the lower bound and 1000 for the upper bound, each partition will contain 250 rows and so the partitioning is even. If you enter 250 for the lower bound and 750 for the upper bound, the second and the third partition will each contain 125 rows and the first and the last partitions each 375 rows. With this configuration, the partitioning is skewed.
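
The following sketch, which is not Talend's generated code, mimics how the Spark JDBC source derives one WHERE clause per partition from these settings; run with the skewed configuration above (bounds 250 and 750, 4 partitions), it prints the four predicates described:

    public class PartitionSketch {
        public static void main(String[] args) {
            long lowerBound = 250, upperBound = 750;
            int numPartitions = 4;
            String column = "id"; // hypothetical numeric partition column

            long stride = (upperBound - lowerBound) / numPartitions;
            for (int i = 0; i < numPartitions; i++) {
                long start = lowerBound + i * stride;
                long end = start + stride;
                String where;
                if (i == 0) {
                    // The first partition also takes every row below the lower bound.
                    where = column + " < " + end;
                } else if (i == numPartitions - 1) {
                    // The last partition also takes every row above the upper bound.
                    where = column + " >= " + start;
                } else {
                    where = column + " >= " + start + " AND " + column + " < " + end;
                }
                System.out.println("partition " + i + ": WHERE " + where);
            }
        }
    }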

Usage in Spark Batch Jobs

This component is used as a start component and requires an output link.

This component should use a tJDBCConfiguration component present in the same Job to connect to a database. You need to drop a tJDBCConfiguration component alongside this component and configure the Basic settings of this component to use tJDBCConfiguration.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file system related component from the Storage family is required in the same Job so that Spark can use this component to connect to the file system to which the jar files dependent on the Job are transferred.

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.