tHiveLoad - 6.3

Talend Open Studio for Big Data Components Reference Guide


Function

This component connects to a given Hive database and copies or moves data into an existing Hive table or a directory you specify.

Purpose

This component is used to write data of different formats into a given Hive table or to export data from a Hive table to a directory.

tHiveLoad properties

Component family

Big Data / Hive

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

Note

When a Job contains a parent Job and a child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read that registered database connection.

For an example about how to share a database connection across Job levels, see Talend Studio User Guide.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM.

  • If you select Amazon EMR, see the article Amazon EMR - Getting Started on Talend Help Center (https://help.talend.com) for information about how to configure the connection.

  • The Custom option allows you to connect to a cluster different from any of the distributions given in this list, that is to say, to connect to a cluster not officially supported by Talend.

To connect to a custom distribution, once you have selected Custom, click the button to display the dialog box in which you can alternatively:

  1. Select Import from existing version to import an officially supported distribution as base and then add other required jar files which the base distribution does not provide.

  2. Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop elements and the index file of these libraries.

    In Talend Exchange, members of the Talend community have shared ready-for-use configuration zip files which you can download from this Hadoop configuration list and use directly in your connection. However, because of the ongoing evolution of the different Hadoop-related projects, you might not be able to find the configuration zip corresponding to your distribution in this list; in that case, it is recommended to use the Import from existing version option to take an existing distribution as base and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy, due to the wide range of different Hadoop distributions and versions that are available. As such, you should only attempt to set up such a connection if you have sufficient Hadoop experience to handle any issues on your own.

    Note

    In this dialog box, the active check box must be kept selected so as to import the jar files pertinent to the connection to be created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

 

Hive version

Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. Along with the evolution of Hadoop, please note the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component with the path value explicitly pointing to the MapReduce framework archive of your cluster. For example:

    mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework
  • If you use Hortonworks Data Platform V2.0.0, the operating system running the distribution and the one running the Talend Job must be of the same type, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job in the same type of operating system as the one in which the Hortonworks Data Platform V2.0.0 distribution is run. For further information about Talend JobServer, see the Talend Installation and Upgrade Guide.

 

Connection mode

Select a connection mode from the list. The options vary depending on the distribution you are using.

 

Hive server

Select the Hive server through which you want the Job using this component to execute queries on Hive.

This Hive server list is available only when the Hadoop distribution to be used, such as Hortonworks Data Platform V1.2.0 (Bimota), supports HiveServer2. It allows you to select HiveServer2 (Hive 2), the server that better supports concurrent connections from multiple clients than HiveServer (Hive 1).

For further information about HiveServer2, see https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2.

 

Host

Database server IP address.

 

Port

Listening port number of DB server.

 

Database

Fill this field with the name of the database.

 

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

Authentication

Use kerberos authentication

If you are accessing a Hive Metastore running with Kerberos security, select this check box and then enter the relevant parameters in the fields that appear.

  • If this cluster is a MapR cluster of the version 4.0.1 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job in each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear, and then MapR should be able to automatically find that ticket on the fly.

The values of the following parameters can be found in the hive-site.xml file of the Hive system to be used.

  1. Hive principal uses the value of hive.metastore.kerberos.principal. This is the service principal of the Hive Metastore.

  2. HiveServer2 local user principal uses the value of hive.server2.authentication.kerberos.principal.

  3. HiveServer2 local user keytab uses the value of hive.server2.authentication.kerberos.keytab.

  4. Metastore URL uses the value of javax.jdo.option.ConnectionURL. This is the JDBC connection string to the Hive Metastore.

  5. Driver class uses the value of javax.jdo.option.ConnectionDriverName. This is the name of the driver for the JDBC connection.

  6. Username uses the value of javax.jdo.option.ConnectionUserName. This, as well as the Password parameter, is the user credential for connecting to the Hive Metastore.

  7. Password uses the value of javax.jdo.option.ConnectionPassword.
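
For illustration, hypothetical values for these parameters might look as follows, expressed here as property=value pairs rather than in the XML form of hive-site.xml (the principals, hosts and paths are placeholders, not defaults):

    hive.metastore.kerberos.principal=hive/_HOST@EXAMPLE.COM
    hive.server2.authentication.kerberos.principal=hive/_HOST@EXAMPLE.COM
    hive.server2.authentication.kerberos.keytab=/etc/security/keytabs/hive.service.keytab
    javax.jdo.option.ConnectionURL=jdbc:mysql://metastore-host:3306/hive
    javax.jdo.option.ConnectionDriverName=com.mysql.jdbc.Driver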

For the other parameters that are displayed, please consult the Hadoop configuration files they belong to. For example, the Namenode principal can be found in the hdfs-site.xml file or the hdfs-default.xml file of the distribution you are using.

This check box is available depending on the Hadoop distribution you are connecting to.

  Use a keytab to authenticate

Select the Use a keytab to authenticate check box to log into a Kerberos-enabled Hadoop system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

 

Use SSL encryption

Select this check box to enable the SSL or TLS encrypted connection.

Then in the fields that are displayed, provide the authentication information:

  • In the Trust store path field, enter the path, or browse to the TrustStore file to be used. By default, the supported TrustStore types are JKS and PKCS 12.

  • To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

This feature is available only to the HiveServer2 in the Standalone mode of the following distributions:

  • Hortonworks Data Platform 2.0 +

  • Cloudera CDH4 +

  • Pivotal HD 2.0 +

  • Amazon EMR 4.0.0 +

Hadoop properties

Set Jobtracker URI

Select this check box to indicate the location of the Jobtracker service within the Hadoop cluster to be used. For example, we assume that you have chosen a machine called machine1 as the JobTracker, then set its location as machine1:portnumber. A Jobtracker is the service that assigns Map/Reduce tasks to specific nodes in a Hadoop cluster. Note that the word job in the term JobTracker does not refer to a Talend Job, but rather to a Hadoop job, described as an MR or MapReduce job in Apache's Hadoop documentation on http://hadoop.apache.org.

If you use YARN in your Hadoop cluster, such as Hortonworks Data Platform V2.0.0 or Cloudera CDH4.3 + (YARN mode), you need to specify the location of the Resource Manager instead of the Jobtracker. Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime, the configuration of this parameter in the Hadoop cluster to be used will be ignored):

  1. Select the Set resourcemanager scheduler address check box and enter the Scheduler address in the field that appears.

  2. Select the Set jobhistory address check box and enter the location of the JobHistory server of the Hadoop cluster to be used. This allows the metrics information of the current Job to be stored in that JobHistory server.

  3. Select the Set staging directory check box and enter this directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files such as yarn-site.xml or mapred-site.xml of your distribution.

  4. Allocate proper memory volumes to the Map and the Reduce computations and the ApplicationMaster of YARN by selecting the Set memory check box in the Advanced settings view.

  5. Select the Set Hadoop user check box and enter the user name under which you want to execute the Job. Since a file or a directory in Hadoop has its specific owner with appropriate read or write rights, this field allows you to execute the Job directly under the user name that has the appropriate rights to access the file or directory to be processed.

  6. Select the Use datanode hostname check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true. When connecting to an S3N filesystem, you must select this check box.

For further information about these parameters, see the documentation or contact the administrator of the Hadoop cluster to be used.

For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.
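
For illustration, assuming a JobTracker/Resource Manager machine called machine1 and the default YARN ports (your cluster may well use different ones), the values entered for these parameters could look as follows:

    Resource manager: machine1:8032
    Resourcemanager scheduler address: machine1:8030
    Jobhistory address: machine1:10020
    Staging directory: /user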

 

Set NameNode URI

Select this check box to indicate the location of the NameNode of the Hadoop cluster to be used. The NameNode is the master node of a Hadoop cluster. For example, we assume that you have chosen a machine called masternode as the NameNode of an Apache Hadoop distribution, then the location is hdfs://masternode:portnumber.

For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.

Microsoft HD Insight properties

WebHCat configuration

Enter the address and the authentication information of the WebHCat service of the Microsoft HD Insight cluster to be used. The Studio uses this service to submit the Job to the HD Insight cluster.

In the Job result folder field, enter the location in which you want to store the execution result of a Job in the Azure Storage to be used.

 

HDInsight configuration

Enter the authentication information of the HD Insight cluster to be used.

 

Windows Azure Storage configuration

Enter the address and the authentication information of the Azure Storage account to be used.

In the Container field, enter the name of the container to be used.

In the Deployment Blob field, enter the location in which you want to store the current Job and its dependent libraries in this Azure Storage account.

 

Load action

Select the action you need to carry out for writing data into the specified destination.

  • When you select LOAD, you are moving or copying data from a directory you specify.

  • When you select INSERT, you are moving or copying data based on queries.
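
In terms of the HiveQL statements these two actions correspond to, a minimal sketch could look as follows (the table, directory and query are hypothetical placeholders):

    -- LOAD: move or copy the files of a directory into a table
    LOAD DATA INPATH '/user/talend/staging' INTO TABLE customers;

    -- INSERT: write the data selected by a query
    INSERT INTO TABLE customers SELECT * FROM customers_staging;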

 

Execution engine

Select this check box and from the drop-down list, select the framework you need to use to perform the INSERT action.

This list is available only when you are using the Embedded mode for the Hive connection and the distribution you are working with is among the following ones:

  • Hortonworks: V2.1 and V2.2.

  • MapR: V4.0.1.

  • Custom: this option allows you to connect to a distribution supporting Tez but not officially supported by Talend.

Before using Tez, ensure that the Hadoop cluster you are using supports Tez. You will need to configure the access to the relevant Tez libraries via the Advanced settings view of this component.

For further information about Hive on Tez, see Apache's related documentation in https://cwiki.apache.org/confluence/display/Hive/Hive+on+Tez. Some examples are presented there to show how Tez can be used to gain performance over MapReduce.
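
For reference, selecting Tez from this list amounts to running the INSERT statement with the Hive execution engine set to Tez; the component configures this for you, but the equivalent manual HiveQL setting would presumably be:

    SET hive.execution.engine=tez;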

 

Target type

This drop-down list appears only when you have selected INSERT from the Load action list.

Select from this list the type of the location you need to write data in.

  • If you select Table as destination, you can still choose to append data to or overwrite the contents in the specified table.

  • If you select Directory as destination, you are overwriting the contents in the specified directory.
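
As a hedged illustration, the generated HiveQL could differ as follows depending on the target type and the action chosen (all names and paths are placeholders):

    -- Table target, appending data
    INSERT INTO TABLE customers SELECT * FROM customers_staging;

    -- Table target, overwriting the contents
    INSERT OVERWRITE TABLE customers SELECT * FROM customers_staging;

    -- Directory target, always overwriting
    INSERT OVERWRITE DIRECTORY '/user/talend/export' SELECT * FROM customers;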

 

Table name

Enter the name of the Hive table you need to write data in.

Note that with the INSERT action, this field is available only when you have selected Table from the Target type list.

 

File path

Enter the directory you need to read data from or write data in, depending on the action you have selected from the Load action list.

  • If you have selected LOAD: this is the path to the data you want to copy or move into the specified Hive table.

  • If you have selected INSERT: this is the directory to which you want to export data from a Hive table. With this action, the File path field is available only when you have selected Directory from the Target type list.

 

The target table uses the Parquet format

If the table in which you need to write data is a Parquet table, select this check box.

Note that when the file format to be used is PARQUET, you might be prompted to find the specific Parquet jar file and install it into the Studio.

  • When the connection mode to Hive is Embedded, the Job is run in your local machine and calls this jar installed in the Studio.

  • When the connection mode to Hive is Standalone, the Job is run in the server hosting Hive and this jar file is sent to the HDFS system of the cluster you are connecting to. Therefore, ensure that you have properly defined the NameNode URI in the corresponding field of the Basic settings view.

This jar file can be downloaded from Apache's site. For further information about how to install an external jar file, see the article Installing external modules on Talend Help Center (https://help.talend.com).

Then from the Compression list that appears, select the compression mode you need to use to handle the Parquet file. The default mode is Uncompressed.

 

Action on file

Select the action to be carried out for writing data.

This list is available only when the target is a Hive table; if the target is a directory, the action to be used is automatically OVERWRITE.

 

Query

This field appears when you have selected INSERT from the Load action list.

Enter the appropriate query for selecting the data to be exported to the specified Hive table or directory.

 

Local

Select this check box to use the Hive LOCAL statement for accessing a local directory. Note that this local directory is actually in the machine in which the Job is run. Therefore, when the connection mode to Hive is Standalone, the Job is run in the machine where the Hive application is installed and thus this local directory is in that machine.

This statement is used along with the directory you have defined in the File path field. Therefore, this Local check box is available only when the File path field is available.

  • If you are using the LOAD action, tHiveLoad copies the local data to the target table.

  • If you are using the INSERT action, tHiveLoad copies data to a local directory.

  • If you leave this Local check box clear, the directory defined in the File path field is assumed to be in the HDFS system to be used and data will be moved to the target location.

For further information about this LOCAL statement, see Apache's documentation about Hive's Language.
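
For illustration, the LOCAL statement changes the sketches above roughly as follows (the paths are hypothetical; with LOAD the source files are read from the local file system and copied, and with INSERT the export directory is local):

    -- LOAD from a local directory into a table
    LOAD DATA LOCAL INPATH '/home/talend/staging' INTO TABLE customers;

    -- INSERT (export) into a local directory
    INSERT OVERWRITE LOCAL DIRECTORY '/home/talend/export' SELECT * FROM customers;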

  Set partitions

Select this check box to use the Hive Partition clause in loading or inserting data in a Hive table. You need to enter the partition keys and their values to be used in the field that appears.

For example, enter country='US', state='CA'. This makes a partition clause reading Partition (country='US', state='CA'), that is to say, a US and CA partition.

Also, it is recommended to select the Create partition if not exist check box that appears to ensure that you will not create a duplicate partition.
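
For example, combining the LOAD action with the partition keys above might generate a statement along these lines (the table and path are placeholders):

    LOAD DATA INPATH '/user/talend/us_ca' INTO TABLE customers PARTITION (country='US', state='CA');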

 

Die on error

Select this check box to kill the Job when an error occurs.

Advanced settings

Tez lib

Select how the Tez libraries are accessed:

  • Auto install: at runtime, the Job uploads and deploys the Tez libraries provided by the Studio into the directory you specified in the Install folder in HDFS field, for example, /tmp/usr/tez.

    If you have set the tez.lib.uris property in the properties table, this directory overrides the value of that property at runtime. But the other properties set in the properties table are still effective.

  • Use exist: the Job accesses the Tez libraries already deployed in the Hadoop cluster to be used. You need to enter the path pointing to those libraries in the Lib path (folder or file) field.

  • Lib jar: this table appears when you have selected Auto install from the Tez lib list and the distribution you are using is Custom. In this table, you need to add the Tez libraries to be uploaded.

 

Temporary path

If you do not want to set the Jobtracker and the NameNode when you execute the query select * from your_table_name, you need to set this temporary path. For example, /C:/select_all in Windows.

 

Hadoop properties

Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the properties defined in that metadata and becomes uneditable unless you change the Property type from Repository to Built-in.

For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want.
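
As a hedged illustration, overrides entered in this table could look like the following, in the property=value form used earlier on this page (the values are placeholders to adapt to your cluster):

    dfs.replication=1
    mapreduce.map.memory.mb=2048
    mapreduce.reduce.memory.mb=4096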

 

Hive properties

Talend Studio uses a default configuration for its engine to perform operations in a Hive database. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones. For further information for Hive dedicated properties, see https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the properties defined in that metadata and becomes uneditable unless you change the Property type from Repository to Built-in.
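
For instance, entries in this table might look as follows, again in property=value form (the values are only examples; adapt them to your needs):

    hive.exec.dynamic.partition=true
    hive.exec.dynamic.partition.mode=nonstrict
    hive.exec.compress.output=true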

 

Mapred job map memory mb and Mapred job reduce memory mb

If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks Data Platform V1.3, you need to set proper memory allocations for the map and reduce computations to be performed by the Hadoop system.

In that situation, you need to enter the values you need in the Mapred job map memory mb and the Mapred job reduce memory mb fields, respectively. By default, both values are 1000, which is normally appropriate for running the computations.

If the distribution is YARN, then the memory parameters to be set become Map (in Mb), Reduce (in Mb) and ApplicationMaster (in Mb), accordingly. These fields allow you to dynamically allocate memory to the map and the reduce computations and the ApplicationMaster of YARN.

 

Path separator in server

Leave the default value of the Path separator in server as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable; in other words, unless that separator is not a colon (:). In that situation, you must change this value to the separator you are using in that host.

 

tStatCatcher Statistics

Select this check box to collect log data at the component level.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access database tables having the same data structure but in different databases, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Scenario 3: Reading data from MySQL databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.

Global Variables

QUERY: the query statement being processed. This is a Flow variable and it returns a string.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component works standalone and supports writing a wide range of data formats such as RC, ORC or AVRO.

If the Studio used to connect to a Hive database is operated on Windows, you must manually create a folder called tmp in the root of the disk where this Studio is installed.

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client in the machine where the Studio is, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box in the Window menu. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR.

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Related scenario

For a related scenario, see Scenario: creating a partitioned Hive table.