tHiveConnection - 6.3

Talend Components Reference Guide

EnrichVersion
6.3
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tHiveConnection opens a connection to a Hive database.

Purpose

This component allows you to establish a Hive connection to be reused by other Hive components in your Job.

tHiveConnection properties

Component Family

Big Data / Hive

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM.

  • If you select Amazon EMR, see the article Amazon EMR - Getting Started on Talend Help Center (https://help.talend.com) for how to configure the connection.

  • The Custom option allows you to connect to a cluster different from any of the distributions given in this list, that is to say, to connect to a cluster not officially supported by Talend.

To connect to a custom distribution, once you have selected Custom, click the [...] button to display the dialog box, in which you can alternatively:

  1. Select Import from existing version to import an officially supported distribution as a base and then add the other required jar files which the base distribution does not provide.

  2. Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop elements and the index file of these libraries.

    In Talend Exchange, members of the Talend community have shared ready-for-use configuration zip files which you can download from the Hadoop configuration list and use directly in your connection. However, because of the ongoing evolution of the different Hadoop-related projects, you might not be able to find the configuration zip corresponding to your distribution in this list; in that case, it is recommended to use the Import from existing version option to take an existing distribution as a base and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy, due to the wide range of different Hadoop distributions and versions that are available. As such, you should only attempt to set up such a connection if you have sufficient Hadoop experience to handle any issues on your own.

    Note

    In this dialog box, keep the active check box selected so as to import the jar files pertinent to the connection to be created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

 

Hive version

Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. Along with the evolution of Hadoop, please note the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component with the path value explicitly pointing to the MapReduce framework archive of your cluster. For example:

    mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework
  • If you use Hortonworks Data Platform V2.0.0, the type of the operating system running the distribution and the one running the Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job on the same type of operating system as the one running the Hortonworks Data Platform V2.0.0 distribution you are using. For further information about Talend JobServer, see the Talend Installation Guide.

Connection

Connection mode

Select a connection mode from the list. The options vary depending on the distribution you are using.

 

Hive server

Select the Hive server through which you want the Job using this component to execute queries on Hive.

This Hive server list is available only when the Hadoop distribution to be used, such as Hortonworks Data Platform V1.2.0 (Bimota), supports HiveServer2. It allows you to select HiveServer2 (Hive 2), the server that better supports the concurrent connections of multiple clients than HiveServer (Hive 1).

For further information about HiveServer2, see https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2.

 

Host

Database server IP address.

 

Port

DB server listening port.

 

Database

Fill this field with the name of the database.

Note

This field is not available when you select Embedded from the Connection mode list.

 

Username and Password

DB user authentication data.

To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

 

Inspect the classpath for configurations

Select this check box to allow the component to check the configuration files in the directory you have set with the $HADOOP_CONF_DIR variable and directly read parameters from these files in this directory. This feature allows you to easily change the Hadoop configuration for the component to switch between different environments, for example, from a test environment to a production environment.

In this situation, the fields or options used to configure Hadoop connection and/or Kerberos security are hidden.

If you want to use certain parameters such as the Kerberos parameters but these parameters are not included in these Hadoop configuration files, you need to create a file called talend-site.xml and put this file into the same directory defined with $HADOOP_CONF_DIR. This talend-site.xml file should read as follows:

<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>talend.kerberos.authentication</name>
        <value>kinit</value>
        <description>Set the Kerberos authentication method to use. Valid values are: kinit or keytab.</description>
    </property>
    <property>
        <name>talend.kerberos.keytab.principal</name>
        <value>user@BIGDATA.COM</value>
        <description>Set the keytab's principal name.</description>
    </property>
    <property>
        <name>talend.kerberos.keytab.path</name>
        <value>/kdc/user.keytab</value>
        <description>Set the keytab's path.</description>
    </property>
    <property>
        <name>talend.encryption</name>
        <value>none</value>
        <description>Set the encryption method to use. Valid values are: none or ssl.</description>
    </property>
    <property>
        <name>talend.ssl.trustStore.path</name>
        <value>ssl</value>
        <description>Set the SSL trust store path.</description>
    </property>
    <property>
        <name>talend.ssl.trustStore.password</name>
        <value>ssl</value>
        <description>Set the SSL trust store password.</description>
    </property>
</configuration>

The parameters read from these configuration files override the default ones used by the Studio. When a parameter does not exist in these configuration files, the default one is used.

Note that this option is available only in Hive Standalone mode with Hive 2.

Authentication

Use kerberos authentication

If you are accessing a Hive Metastore running with Kerberos security, select this check box and then enter the relevant parameters in the fields that appear.

  • If this cluster is a MapR cluster of version 4.0.1 or later, you can set the MapR ticket authentication configuration in addition or as an alternative, by following the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job in each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear, and then MapR should be able to automatically find that ticket on the fly.

The values of the following parameters can be found in the hive-site.xml file of the Hive system to be used.

  1. Hive principal uses the value of hive.metastore.kerberos.principal. This is the service principal of the Hive Metastore.

  2. HiveServer2 local user principal uses the value of hive.server2.authentication.kerberos.principal.

  3. HiveServer2 local user keytab uses the value of hive.server2.authentication.kerberos.keytab.

  4. Metastore URL uses the value of javax.jdo.option.ConnectionURL. This is the JDBC connection string to the Hive Metastore.

  5. Driver class uses the value of javax.jdo.option.ConnectionDriverName. This is the name of the driver for the JDBC connection.

  6. Username uses the value of javax.jdo.option.ConnectionUserName. This, as well as the Password parameter, is the user credential for connecting to the Hive Metastore.

  7. Password uses the value of javax.jdo.option.ConnectionPassword.

For the other parameters that are displayed, please consult the Hadoop configuration files they belong to. For example, the Namenode principal can be found in the hdfs-site.xml file or the hdfs-default.xml file of the distribution you are using.

This check box is available depending on the Hadoop distribution you are connecting to.
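If you have command-line access to the cluster, you can also print the values these fields expect directly from a Hive session: in Hive QL, a SET statement with a property name but no value displays the current setting. A minimal sketch, assuming an already authenticated Beeline or Hive CLI session (the property names are the real ones listed above):

-- SET <name>; without a value prints the value the cluster resolves.
SET hive.metastore.kerberos.principal;
SET hive.server2.authentication.kerberos.principal;
SET hive.server2.authentication.kerberos.keytab;
SET javax.jdo.option.ConnectionURL;
SET javax.jdo.option.ConnectionDriverName;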

  Use a keytab to authenticate

Select the Use a keytab to authenticate check box to log into a Kerberos-enabled Hadoop system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

 

Use SSL encryption

Select this check box to enable the SSL or TLS encrypted connection.

Then in the fields that are displayed, provide the authentication information:

  • In the Trust store path field, enter the path, or browse to the TrustStore file to be used. By default, the supported TrustStore types are JKS and PKCS 12.

  • To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

This feature is available only to the HiveServer2 in the Standalone mode of the following distributions:

  • Hortonworks Data Platform 2.0 +

  • Cloudera CDH4 +

  • Pivotal HD 2.0 +

  • Amazon EMR 4.0.0 +

Hadoop properties

Set Jobtracker URI

Select this check box to indicate the location of the JobTracker service within the Hadoop cluster to be used. For example, if you have chosen a machine called machine1 as the JobTracker, set its location as machine1:portnumber. A JobTracker is the service that assigns Map/Reduce tasks to specific nodes in a Hadoop cluster. Note that the notion job in the term JobTracker does not designate a Talend Job, but rather a Hadoop job, described as an MR or MapReduce job in Apache's Hadoop documentation on http://hadoop.apache.org.

This property is required when the Job using the query is executed on Windows and the query is a Select query, for example, SELECT your_column_name FROM your_table_name.

If you use YARN in your Hadoop cluster, such as Hortonworks Data Platform V2.0.0 or Cloudera CDH4.3 + (YARN mode), you need to specify the location of the Resource Manager instead of the JobTracker. Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime the configuration for this parameter in the Hadoop cluster is ignored; the property names behind these check boxes are shown in the sketch after this list):

  1. Select the Set resourcemanager scheduler address check box and enter the Scheduler address in the field that appears.

  2. Select the Set jobhistory address check box and enter the location of the JobHistory server of the Hadoop cluster to be used. This allows the metrics information of the current Job to be stored in that JobHistory server.

  3. Select the Set staging directory check box and enter the directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files such as yarn-site.xml or mapred-site.xml of your distribution.

  4. Allocate proper memory volumes to the Map and the Reduce computations and the ApplicationMaster of YARN by selecting the Set memory check box in the Advanced settings view.

  5. Select the Set Hadoop user check box and enter the user name under which you want to execute the Job. Since a file or a directory in Hadoop has its specific owner with appropriate read or write rights, this field allows you to execute the Job directly under the user name that has the appropriate rights to access the file or directory to be processed.

  6. Select the Use datanode hostname check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true. When connecting to an S3N filesystem, you must select this check box.
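For reference, the check boxes above map to standard Hadoop and YARN property names. The following Hive QL sketch shows that mapping with illustrative values only (machine1 and the ports are placeholders; take the real values from the yarn-site.xml, mapred-site.xml and hdfs-site.xml files of your cluster):

-- Illustrative session-level equivalents of the check boxes above.
SET yarn.resourcemanager.scheduler.address=machine1:8030;  -- Set resourcemanager scheduler address
SET mapreduce.jobhistory.address=machine1:10020;           -- Set jobhistory address
SET yarn.app.mapreduce.am.staging-dir=/user;               -- Set staging directory
SET dfs.client.use.datanode.hostname=true;                 -- Use datanode hostname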

For further information about these parameters, see the documentation or contact the administrator of the Hadoop cluster to be used.

For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.

 

Set NameNode URI

Select this check box to indicate the location of the NameNode of the Hadoop cluster to be used. The NameNode is the master node of a Hadoop cluster. For example, if you have chosen a machine called masternode as the NameNode of an Apache Hadoop distribution, the location is hdfs://masternode:portnumber.

This property is required when the Job using the query is executed on Windows and the query is a Select query, for example, SELECT your_column_name FROM your_table_name.

For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.

Microsoft HD Insight properties

WebHCat configuration

Enter the address and the authentication information of the WebHCat service of the Microsoft HD Insight cluster to be used. The Studio uses this service to submit the Job to the HD Insight cluster.

In the Job result folder field, enter the location in which you want to store the execution result of a Job in the Azure Storage to be used.

 

HDInsight configuration

Enter the authentication information of the HD Insight cluster to be used.

 

Windows Azure Storage configuration

Enter the address and the authentication information of the Azure Storage account to be used.

In the Container field, enter the name of the container to be used.

In the Deployment Blob field, enter the location in which you want to store the current Job and its dependent libraries in this Azure Storage account.

 

Use or register a shared DB Connection

Select this check box to share your connection or fetch a connection shared by a parent or child Job. This allows you to share one single DB connection among several DB connection components from different Job levels that can be either parent or child.

Warning

This option is incompatible with the Use dynamic job and Use an independent process to run subjob options of the tRunJob component. Using a shared connection together with a tRunJob component with either of these two options enabled will cause your Job to fail.

Shared DB Connection Name: set or type in the shared connection name.

 

Execution engine

Select this check box and from the drop-down list, select the framework you need to use to run the Job.

This list is available only when you are using the Embedded mode for the Hive connection and the distribution you are working with is among the following ones:

  • Hortonworks: V2.1 and V2.2.

  • MapR: V4.0.1.

  • Custom: this option allows you to connect to a distribution supporting Tez but not officially supported by Talend.

Before using Tez, ensure that the Hadoop cluster you are using supports Tez. You will need to configure the access to the relevant Tez libraries via the Advanced settings view of this component.

For further information about Hive on Tez, see Apache's related documentation in https://cwiki.apache.org/confluence/display/Hive/Hive+on+Tez. Some examples are presented there to show how Tez can be used to gain performance over MapReduce.
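Under the hood, the framework selected here corresponds to Hive's hive.execution.engine property. The component sets it for you; the following Hive QL sketch only illustrates the property involved (mr is the default value):

-- Run subsequent queries on Tez instead of MapReduce.
SET hive.execution.engine=tez;
-- Switch back to classic MapReduce.
SET hive.execution.engine=mr;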

HBase Configuration

Store by HBase

Select this check box to display the parameters to be set to allow the Hive components to access HBase tables:

  • Once this access is configured, you will be able to use Hive QL statements in tHiveRow and tHiveInput to read and write data in HBase.

  • If you are using the Kerberos authentication, you need to define the HBase related principals in the corresponding fields that are displayed.

For further information about this access involving Hive and HBase, see Apache's Hive documentation about Hive/HBase integration.

 

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transactions between your Studio and your database. Note that when you configure Zookeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; in that case, select the Set Zookeeper znode parent check box to define this property.

 

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are using.
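These two fields, together with the optional znode parent, correspond to standard HBase client properties. A hedged Hive QL sketch with illustrative values (the host names and /hbase are placeholders; take the real values from the hbase-site.xml file of your cluster):

SET hbase.zookeeper.quorum=zk1.example.com,zk2.example.com;  -- Zookeeper quorum
SET hbase.zookeeper.property.clientPort=2181;                -- Zookeeper client port
SET zookeeper.znode.parent=/hbase;                           -- Set Zookeeper znode parent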

 

Define the jars to register for HBase

Select this check box to display the Register jar for HBase table, in which you can register any missing jar file required by HBase, for example, the Hive Storage Handler, which by default is registered along with your Hive installation.

  Register jar for HBase

Click the [+] button to add rows to this table; then, in the Jar name column, select the jar file(s) to be registered and, in the Jar path column, enter the path(s) pointing to those jar file(s).
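In plain Hive QL, registering an extra jar for a session is done with the ADD JAR statement; this table plays the same role for the component. A sketch with a hypothetical path (the handler jar name and location vary with your distribution):

-- Path and file name are illustrative; locate the handler jar shipped with your Hive installation.
ADD JAR /usr/lib/hive/lib/hive-hbase-handler.jar;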

Advanced settings

Tez lib

Select how the Tez libraries are accessed:

  • Auto install: at runtime, the Job uploads and deploys the Tez libraries provided by the Studio into the directory you specified in the Install folder in HDFS field, for example, /tmp/usr/tez.

    If you have set the tez.lib.uris property in the properties table, this directory overrides the value of that property at runtime; the other properties set in the properties table are still effective.

  • Use exist: the Job accesses the Tez libraries already deployed in the Hadoop cluster to be used. You need to enter the path pointing to those libraries in the Lib path (folder or file) field.

  • Lib jar: this table appears when you have selected Auto install from the Tez lib list and the distribution you are using is Custom. In this table, you need to add the Tez libraries to be uploaded.
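For reference, the location of the Tez libraries is what the tez.lib.uris property conveys to Tez; with Auto install, the Install folder in HDFS value effectively takes the place of that property at runtime. An illustrative Hive QL sketch (the URI is a placeholder; this property is normally defined in tez-site.xml rather than per session):

-- Placeholder URI; point it at the directory or archive actually holding the Tez libraries.
SET tez.lib.uris=hdfs://masternode:8020/tmp/usr/tez;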

 

Hadoop properties

Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the properties defined in that metadata and becomes uneditable unless you change the Property type from Repository to Built-in.

For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and select the version of the documentation you want.
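Each row of this table is a plain property name/value pair, exactly as it would appear in the cluster's *-site.xml files. A hedged sketch of the kind of overrides you might enter, written here as Hive QL SET statements (the names are genuine Hadoop properties; the values are examples only):

SET dfs.replication=1;                 -- override the HDFS replication factor
SET mapreduce.map.speculative=false;   -- disable speculative execution of map tasks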

 

Hive properties

Talend Studio uses a default configuration for its engine to perform operations in a Hive database. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones. For further information about Hive-dedicated properties, see https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the properties defined in that metadata and becomes uneditable unless you change the Property type from Repository to Built-in.
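As with the Hadoop properties above, each row is a name/value pair; in a Hive session, the equivalent is a SET statement. Illustrative examples only (both names are genuine Hive properties):

SET hive.exec.dynamic.partition=true;           -- allow dynamic partition inserts
SET hive.exec.dynamic.partition.mode=nonstrict; -- do not require a static partition column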

 

Mapred job map memory mb and Mapred job reduce memory mb

If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks Data Platform V1.3, you need to set proper memory allocations for the map and reduce computations to be performed by the Hadoop system.

In that situation, you need to enter the values you need in the Mapred job map memory mb and the Mapred job reduce memory mb fields, respectively. By default, both values are 1000, which is normally appropriate for running the computations.

 

Path separator in server

Leave the default value of the Path separator in server field as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable, that is, unless that separator is not a colon (:). In that situation, you must change this value to the one you are using on that host.

tStatCatcher Statistics

Select this check box to collect the log data at a component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is generally used with other Hive components, particularly tHiveClose.

If the Studio used to connect to a Hive database is operated on Windows, you must manually create a folder called tmp in the root of the disk where this Studio is installed.

Prerequisites

The Hadoop distribution must be properly installed so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box of the Window menu. This argument provides the Studio with the path to the native library of that MapR client, which allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR.

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

n/a

Connecting to a custom Hadoop distribution

As explained in the properties table, when you select the Custom option from the Distribution drop-down list, you are connecting to a Hadoop distribution different from any of the Hadoop distributions provided on that Distribution list in the Studio.

After selecting this Custom option, click the [...] button to display the [Import custom definition] dialog box and proceed as follows:

  1. Depending on your situation, select Import from existing version or Import from zip to configure the custom Hadoop distribution to be connected to.

    • If you have the zip file of the custom Hadoop distribution you need to connect to, select Import from zip. The Talend community provides this kind of zip file, which you can download from http://www.talendforge.org/exchange/index.php.

    • Otherwise, select Import from existing version to import an officially supported Hadoop distribution as a base, so as to customize it by following the wizard.

    Note that the check boxes in the wizard allow you to select the Hadoop element(s) you need to import. Depending on the context in which you are creating the connection, not all of the check boxes are displayed in the wizard. For example, if you are creating this connection for a Hive component, only the Hive check box appears.

  2. Whether you have selected Import from existing version or Import from zip, verify that each check box next to the Hadoop element you need to import has been selected.

  3. Click OK and then in the pop-up warning, click Yes to accept overwriting any custom setup of jar files previously implemented.

    Once done, the [Custom Hadoop version definition] dialog box becomes active.

    This dialog box lists the Hadoop elements and their jar files you are importing.

  4. If you have selected Import from zip, click OK to validate the imported configuration.

    If you have selected Import from existing version as a base, you may still need to add more jar files to customize that version. Then, from the tab of the Hadoop element you need to customize, for example the HDFS/HCatalog/Oozie tab, click the [+] button to open the [Select libraries] dialog box.

  5. Select the External libraries option to open its view.

  6. Browse to and select any jar file you need to import.

  7. Click OK to validate the changes and to close the [Select libraries] dialog box.

    Once done, the selected jar file appears on the list in the tab of the Hadoop element being configured.

    Note that if you need to share the custom Hadoop setup with another Studio, you can export this custom connection from the [Custom Hadoop version definition] window.

  8. In the [Custom Hadoop version definition] dialog box, click OK to validate the customized configuration. This brings you back to the Distribution list in the Basic settings view of the component.

Now that the configuration of the custom Hadoop version has been set up and you are back to the Distribution list, you are able to continue to enter other parameters required by the connection.

If the custom Hadoop version you need to connect to contains YARN and you want to use it, select the Use YARN check box next to the Distribution list.

A video demonstrating how to set up the connection to a custom Hadoop cluster, also referred to as an unsupported Hadoop distribution, taking HDFS as an example, is available at the following link: How to add an unsupported Hadoop distribution to the Studio.

Scenario: creating a partitioned Hive table

This scenario illustrates how to use tHiveConnection, tHiveCreateTable and tHiveLoad to create a partitioned Hive table and write data in it.

Note that tHiveCreateTable and tHiveLoad are available only when you are using one of the Talend solutions with Big Data.

The sample data to be used in this scenario is employee information of a company, reading as follows:

1;Lyndon;Fillmore;21-05-2008;US
2;Ronald;McKinley;15-08-2008
3;Ulysses;Roosevelt;05-10-2008
4;Harry;Harrison;23-11-2007
5;Lyndon;Garfield;19-07-2007
6;James;Quincy;15-07-2008
7;Chester;Jackson;26-02-2008
8;Dwight;McKinley;16-07-2008
9;Jimmy;Johnson;23-12-2007
10;Herbert;Fillmore;03-04-2008

The information contains some employees' names and the dates when they were registered in an HR system. Since these employees work for the US subsidiary of the company, you will create a US partition for this sample data.

Before starting to replicate this scenario, ensure that you have appropriate rights and permissions to access the Hive database to be used.

Note that if you are using the Windows operating system, you have to create a tmp folder at the root of the disk where the Studio is installed.

Then proceed as follows:

Linking the components

  1. In the Integration perspective of the Studio, create an empty Job from the Job Designs node in the Repository tree view.

    For further information about how to create a Job, see the chapter describing how to design a Job in the Talend Studio User Guide.

  2. Drop tHiveConnection, tHiveCreateTable and tHiveLoad onto the workspace.

  3. Connect them using the Trigger > On Subjob OK link.

Configuring the connection to Hive

Configuring tHiveConnection

  1. Double-click tHiveConnection to open its Component view.

  2. From the Property type list, select Built-in. If you have created the connection to be used in the Repository, then select Repository, click the [...] button to open the [Repository content] dialog box and select that connection. This way, the Studio will reuse that set of connection information for this Job.

    For further information about how to create a Hadoop connection in the Repository, see the chapter describing the Hadoop cluster node of the Talend Big Data Getting Started Guide.

  3. In the Version area, select the Hadoop distribution to be used and its version. If you cannot find the distribution corresponding to yours in the list, select Custom so as to connect to a Hadoop distribution not officially supported in the Studio.

    For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.

  4. In the Connection area, enter the connection parameters to the Hive database to be used.

  5. In the Name node field, enter the location of the master node, the NameNode, of the distribution to be used. For example, hdfs://talend-hdp-all:8020.

  6. In the Job tracker field, enter the location of the JobTracker of your distribution. For example, talend-hdp-all:50300.

    Note that the notion Job in the term JobTracker designates the MR or MapReduce jobs described in Apache's documentation on http://hadoop.apache.org/.

Creating the Hive table

Defining the schema

  1. Double-click tHiveCreateTable to open its Component view.

  2. Select the Use an existing connection check box and from Component list, select the connection configured in the tHiveConnection component you are using for this Job.

  3. Click the [...] button next to Edit schema to open the schema editor.

  4. Click the [+] button four times to add four rows and, in the Column column, rename them to Id, FirstName, LastName and Reg_date, respectively.

    Note that you cannot use the Hive reserved keywords to name the columns, such as location or date.

  5. In the Type column, select the type of the data in each column. In this scenario, Id is of the Integer type, Reg_date is of the Date type and the others are of the String type.

  6. In the DB type column, select the Hive type of each column corresponding to the data types you have defined. For example, Id is of the INT type and Reg_date of the TIMESTAMP type.

  7. In the Data pattern column, define the pattern corresponding to that of the raw data. In this example, use the default one.

  8. Click OK to validate these changes.

Defining the table settings

  1. In the Table name field, enter the name of the Hive table to be created. In this scenario, it is employees.

  2. From the Action on table list, select Create table if not exists.

  3. From the Format list, select the data format that this Hive table is created for. In this scenario, it is TEXTFILE.

  4. Select the Set partitions check box to add the US partition as explained at the beginning of this scenario. To define this partition, click the [...] button next to the Edit schema field that appears.

  5. Leave the Set file location check box clear to use the default path for the Hive table.

  6. Select the Set Delimited row format check box to display the available options of row format.

  7. Select the Field check box and enter a semicolon (;) as field separator in the field that appears.

  8. Select the Line check box and leave the default value as line separator.
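For reference, the settings defined in these steps amount to a Hive QL statement along the following lines. This is a hedged sketch rather than the exact statement the component generates; the column types, the country partition and the semicolon delimiter mirror the choices made in this scenario:

CREATE TABLE IF NOT EXISTS employees (
    Id INT,
    FirstName STRING,
    LastName STRING,
    Reg_date TIMESTAMP
)
PARTITIONED BY (country STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ';'
STORED AS TEXTFILE;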

Writing data to the table

Configuring tHiveLoad

  1. Double-click tHiveLoad to open its Component view.

  2. Select the Use an existing connection check box and from Component list, select the connection configured in the tHiveConnection component you are using for this Job.

  3. From the Load action field, select LOAD to write data from the file holding the sample data that is presented at the beginning of this scenario.

  4. In the File path field, enter the directory where the sample data is stored. In this example, the data is stored in the HDFS system to be used. In real-world practice, you can use tHDFSOutput to write data into the HDFS system, and you need to ensure that the Hive application has the appropriate rights and permissions to read or even move the data.

    For further information about tHDFSOutput, see tHDFSOutput; for further information about the related rights and permissions, see the documentation or contact the administrator of the Hadoop cluster to be used.

    Note that if you need to read data from a local file system other than the HDFS system, ensure that the data to be read is stored in the local file system of the machine in which the Job is run, and then select the Local check box in this Basic settings view. For example, when the connection mode to Hive is Standalone, the Job is run on the machine where the Hive application is installed, and thus the data should be stored on that machine.

  5. In the Table name field, enter the name of the target table you need to load data in. In this scenario, it is employees.

  6. From the Action on file list, select APPEND.

  7. Select the Set partitions check box and in the field that appears, enter the partition you need to add data to. In this scenario, this partition is country='US'.
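The configuration above corresponds to a Hive QL LOAD DATA statement roughly like the following sketch (the HDFS path is hypothetical; APPEND means the statement is issued without the OVERWRITE keyword, and LOCAL would be added if the Local check box were selected):

LOAD DATA INPATH '/user/talend/employees.csv'
INTO TABLE employees
PARTITION (country='US');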

Executing the Job

Then you can press F6 to run this Job.

Once done, the Run view is opened automatically, where you can check the execution process.

You can also verify the results in the web console of the Hadoop distribution used.

If you need to obtain more details about the Job, it is recommended to use the web console of the JobTracker provided by the Hadoop distribution you are using.