tOracleInput - 6.3

Talend Components Reference Guide

EnrichVersion
6.3
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tOracleInput reads a database and extracts fields based on a query.

Purpose

tOracleInput executes a database query with a strictly defined order of fields, which must correspond to the schema definition. It then passes the field list on to the next component via a Main row link.

Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks: Standard, MapReduce, and Spark Batch. Each framework is covered in its own section below.

tOracleInput properties

Component family

Databases/Oracle

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

Note

When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read that registered database connection.

For an example about how to share a database connection across Job levels, see Talend Studio User Guide.

 

Connection type

Drop-down list of available drivers:

Oracle OCI: Select this connection type to use Oracle Call Interface with a set of C-language software APIs that provide an interface to the Oracle database.

Oracle Custom: Select this connection type to access a clustered database.

Oracle Service Name: Select this connection type to use the TNS alias that you give when you connect to the remote database.

WALLET: Select this connection type to store credentials in an Oracle wallet.

Oracle SID: Select this connection type to uniquely identify a particular database on a system.
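
For reference, the JDBC URL forms that these connection types typically correspond to are sketched below. The host, port, SID, service name and TNS alias shown are hypothetical; in practice the component assembles the URL for you from the Basic settings fields.

    jdbc:oracle:thin:@dbhost:1521:ORCL            (Oracle SID)
    jdbc:oracle:thin:@//dbhost:1521/MYSERVICE     (Oracle Service Name)
    jdbc:oracle:oci:@TNS_ALIAS                    (Oracle OCI; requires a local Oracle client installation)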

 

DB Version

Select the Oracle version in use.

 

Host

Database server IP address.

 

Port

Listening port number of DB server.

 

Database

Name of the database.

 

Oracle schema

Oracle schema name.

 

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Talend Studio User Guide.

This dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and is recommended to be used for this purpose only; it is not recommended for creating tables.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Table name

Database table name.

 

Query type and Query

Enter your DB query, paying particular attention to the field order so that it matches the schema definition.

Warning

If using the dynamic schema feature, the SELECT query must include the * wildcard, to retrieve all of the columns from the table selected.
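
As a minimal illustration, for a schema defined with the columns ID and NAME, in that order (hypothetical table and column names), the query entered in the Query field would look like this:

    "SELECT EMPLOYEE.ID, EMPLOYEE.NAME FROM EMPLOYEE"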

  Specify a data source alias

Select this check box and specify the alias of a data source created on the Talend Runtime side to use the shared connection pool defined in the data source configuration. This option works only when you deploy and run your Job in Talend Runtime. For a related use case, see Scenario 4: Retrieving data from a MySQL database using the data source on Talend Runtime side to set up the database connection.

Warning

If you use the component's own DB configuration, your data source connection will be closed at the end of the component. To prevent this from happening, use a shared DB connection with the data source alias specified.

This check box is not available when the Use an existing connection check box is selected.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing connection check box is selected.

 

tStatCatcher Statistics

Select this check box to collect log data at the component level.

 

Use cursor

Select this check box and in the Cursor size field displayed, specify the number of rows to fetch in one go from the database. The performance can be improved by tuning this fetch size to an appropriate value.
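
Under the hood, the Cursor size roughly corresponds to the JDBC fetch size. The following plain-JDBC sketch shows the equivalent setting (hypothetical URL, credentials and table; the Oracle JDBC driver must be on the classpath):

    import java.sql.*;

    public class FetchSizeSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(1000); // comparable to setting Cursor size to 1000
                try (ResultSet rs = stmt.executeQuery("SELECT ID, NAME FROM PERSON")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("ID") + " " + rs.getString("NAME"));
                    }
                }
            }
        }
    }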

 

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from all the String/Char columns.

 

Trim column

Remove leading and trailing whitespace from defined columns.

 

No null values

Select this check box to improve performance when the data contains no null values.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Scenario 3: Reading data from MySQL databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.

Global Variables 

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

QUERY: the query statement being processed. This is a Flow variable and it returns a string.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
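
For example, in a tJava component placed after tOracleInput, these variables can be read from globalMap as shown below (the component name tOracleInput_1 is an assumption about your Job's naming):

    // Code for a tJava component executed after tOracleInput_1 has finished
    Integer nbLine = (Integer) globalMap.get("tOracleInput_1_NB_LINE");
    String query = (String) globalMap.get("tOracleInput_1_QUERY");
    System.out.println("Rows read: " + nbLine + " by query: " + query);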

Usage

This component covers all possible SQL queries for Oracle databases.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find out and add all missing JARs easily on the Modules tab in the Integration perspective of your studio. For details, see the article Installing External Modules on Talend Help Center (https://help.talend.com) or see how to configure the Studio in the Talend Installation Guide.

Scenario 1: Using context parameters when reading a table from an Oracle database

In this scenario, we will read a table from an Oracle database using a context parameter to refer to the table name.

Dropping and linking the components

  1. Create a new Job and add the following components by typing their names in the design workspace or dropping them from the Palette: a tOracleInput component and a tLogRow component.

  2. Connect tOracleInput to tLogRow using a Row > Main link.

Configuring the components

  1. Double-click tOracleInput to open its Basic Settings view in the Component tab.

  2. Select a connection type from the Connection Type drop-down list. In this example, it is Oracle SID.

    Select the version of the Oracle database to be used from the DB Version drop-down list. In this example, it is Oracle 12-7.

    In the Host field, enter the Oracle database server's IP address. In this example, it is 192.168.31.32.

    In the Database field, enter the database name. In this example, it is TALEND.

    In the Oracle schema field, enter the Oracle schema name. In this example, it is TALEND.

    In the Username and Password fields, enter the authentication details.

  3. Click the [...] button next to Edit schema to open the schema editor.

  4. Click the [+] button to add four columns: ID and AGE of the integer type, NAME and SEX of the string type.

    Click OK to close the schema editor and accept the propagation prompted by the pop-up dialog box.

  5. Put the cursor in the Table Name field and press F5 to create a context parameter. The [New Context Parameter] dialog box pops up.

    For more information about context settings, see Talend Studio User Guide.

  6. In the Name field, enter the context parameter name. In this example, it is TABLE.

    In the Default value field, enter the name of the Oracle database table to be queried. In this example, it is PERSON.

  7. Click Finish to validate the setting.

    The context parameter context.TABLE automatically appears in the Table Name field.

  8. In the Query Type list, select Built-In. Then, click Guess Query to get the query statement.

    "SELECT 
      "+context.TABLE+".\"ID\", 
      "+context.TABLE+".NAME, 
      "+context.TABLE+".SEX, 
      "+context.TABLE+".AGE
    FROM "+context.TABLE
  9. Double-click tLogRow to open its Basic settings view in the Component tab.

  10. In the Mode area, select Table (print values in cells of a table) for a better display of the results.

Saving and executing the Job

  1. Press Ctrl + S to save the Job.

  2. Press F6 to run the Job.

    When the Job completes, the data in the Oracle database table PERSON is displayed on the console.

tOracleInput properties in MapReduce Jobs

Component family

Databases/Oracle

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

Connection type

Drop-down list of available drivers:

Oracle OCI: Select this connection type to use Oracle Call Interface with a set of C-language software APIs that provide an interface to the Oracle database.

Oracle Custom: Select this connection type to access a clustered database.

Oracle Service Name: Select this connection type to use the TNS alias that you give when you connect to the remote database.

WALLET: Select this connection type to store credentials in an Oracle wallet.

Oracle SID: Select this connection type to uniquely identify a particular database on a system.

 

DB Version

Select the Oracle version in use.

 

Host

Database server IP address.

 

Port

Listening port number of DB server.

 

Database

Name of the database.

 

Oracle schema

Oracle schema name.

 

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Table name

Database table name.

 

Query type and Query

Enter your DB query, paying particular attention to the field order so that it matches the schema definition.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Usage in Map/Reduce Jobs

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as its output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tOracleInput properties in Spark Batch Jobs

Component family

Databases/Oracle

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

Note

When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read that registered database connection.

For an example about how to share a database connection across Job levels, see Talend Studio User Guide.

 

Connection type

The available drivers are:

  • Oracle OCI: Select this connection type to use Oracle Call Interface with a set of C-language software APIs that provide an interface to the Oracle database.

  • Oracle Custom: Select this connection type to access a clustered database. With this type of connection, the Username and the Password fields are deactivated and you need to enter the connection URL in the URL field that is displayed.

    For further information about the valid form of this URL, see JDBC Connection strings from the Oracle documentation.

  • Oracle Service Name: Select this connection type to use the TNS alias that you give when you connect to the remote database.

  • WALLET: Select this connection type to store credentials in an Oracle wallet.

  • Oracle SID: Select this connection type to uniquely identify a particular database on a system.

 

DB Version

Select the Oracle version in use.

 

Host

Database server IP address.

 

Port

Listening port number of DB server.

 

Database

Name of the database.

 

Oracle schema

Oracle schema name.

 

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Table Name

Type in the name of the table from which you need to read data.

 

Query type and Query

Enter your database query, paying particular attention to the field order so that it matches the schema definition.

From Spark V2.0 onwards, Spark SQL no longer recognizes the prefix of a database table. This means that you must enter only the table name, without any prefix indicating, for example, the schema this table belongs to.

For example, to query a table system.mytable, where the system prefix indicates the schema that the mytable table belongs to, you must enter mytable only in the query.
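
For instance, with the system.mytable example above and two hypothetical columns ID and NAME, the query would be written as follows from Spark V2.0 onwards:

    "SELECT ID, NAME FROM mytable"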

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing connection check box is selected.

 

Spark SQL JDBC parameters

Add the JDBC properties supported by Spark SQL to this table. For a list of the user-configurable properties, see JDBC to other databases in the Spark documentation.

This component automatically sets the url, dbtable and driver properties by using the configuration from the Basic settings tab.

 

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from all the String/Char columns.

 

Trim column

Remove leading and trailing whitespace from defined columns.

 

Enable partitioning

Select this check box to read data in partitions.

Define, within double quotation marks, the following parameters to configure the partitioning:

  • Partition column: the numeric column used as partition key.

  • Lower bound of the partition stride and Upper bound of the partition stride: enter the lower bound and the upper bound to determine the partition stride. These bounds do not filter the table rows. All rows in the table are partitioned and returned.

  • Number of partitions: the number of partitions into which the table rows are split. Each Spark worker handles only one of the partitions at a time.

The average size of the partitions is the result of the difference between the upper bound and the lower bound divided by the number of partitions, that is to say, (upperBound - lowerBound)/partitionNumber, while the first and the last partitions also include all the other rows that are not contained in the other partitions.

For example, to partition 1000 rows into 4 partitions, if you enter 0 for the lower bound and 1000 for the upper bound, each partition will contain 250 rows and so the partitioning is even. If you enter 250 for the lower bound and 750 for the upper bound, the second and the third partition will each contain 125 rows and the first and the last partitions each 375 rows. With this configuration, the partitioning is skewed.
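
The arithmetic behind the skewed example above can be sketched as follows (lower bound 250, upper bound 750, 4 partitions); this is purely illustrative of how the stride and the partition ranges are derived:

    public class PartitionStride {
        public static void main(String[] args) {
            long lowerBound = 250, upperBound = 750;
            int numPartitions = 4;
            long stride = (upperBound - lowerBound) / numPartitions; // 125
            for (int i = 0; i < numPartitions; i++) {
                long start = lowerBound + (long) i * stride;
                long end = start + stride;
                // The first partition also takes every row below the lower bound,
                // and the last partition every row above the upper bound.
                System.out.println("partition " + i + ": [" + start + ", " + end + ")");
            }
        }
    }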

Usage in Spark Batch Jobs

This component is used as a start component and requires an output link.

This component should use a tOracleConfiguration component present in the same Job to connect to Oracle. You need to select the Use an existing connection check box and then select the tOracleConfiguration component to be used.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file system related component from the Storage family is required in the same Job so that Spark can use this component to connect to the file system to which the jar files dependent on the Job are transferred.

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.