tHCatalogOperation - 6.1

Talend Components Reference Guide

Version: 6.1

Applies to: Talend Big Data, Talend Big Data Platform, Talend Data Fabric, Talend Data Integration, Talend Data Management Platform, Talend Data Services Platform, Talend ESB, Talend MDM Platform, Talend Open Studio for Big Data, Talend Open Studio for Data Integration, Talend Open Studio for Data Quality, Talend Open Studio for ESB, Talend Open Studio for MDM, Talend Real-Time Big Data Platform

Tasks: Data Governance, Data Quality and Preparation, Design and Development

Platform: Talend Studio

Warning

This component is available in the Palette of Talend Studio only if you have subscribed to one of the Talend solutions with Big Data.

tHCatalogOperation Properties

Component family

Big Data / HCatalog

 

Function

This component allows you to manage data stored in an HCatalog-managed Hive database, table, or partition.

Purpose

The tHCatalogOperation component prepares an HCatalog-managed database, table, or partition for processing.

Basic settings

Property type

Either Built-in or Repository

Built-in: No property data stored centrally.

Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM.

  • The Custom option allows you to connect to a cluster different from any of the distributions given in this list, that is to say, to connect to a cluster not officially supported by Talend.

In order to connect to a custom distribution, once you have selected Custom, click the [...] button to display the dialog box in which you can alternatively:

  1. Select Import from existing version to import an officially supported distribution as base and then add other required jar files which the base distribution does not provide.

  2. Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop elements and the index file of these libraries.

    In Talend Exchange, members of the Talend community have shared ready-for-use configuration zip files, which you can download from this Hadoop configuration list and use directly in your connection. However, because of the ongoing evolution of the different Hadoop-related projects, you might not be able to find the configuration zip corresponding to your distribution in this list; in that case, it is recommended to use the Import from existing version option to take an existing distribution as base and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy, due to the wide range of different Hadoop distributions and versions that are available. As such, you should only attempt to set up such a connection if you have sufficient Hadoop experience to handle any issues on your own.

    Note

    In this dialog box, the active check box must be kept selected so as to import the jar files pertinent to the connection to be created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

 

HCatalog version

Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. Along with the evolution of Hadoop, please note the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component with the path value explicitly pointing to the MapReduce framework archive of your cluster. For example:

    mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework
  • If you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and for running a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job in the same type of operating system as the one in which the Hortonworks Data Platform V2.0.0 distribution you are using is run. For further information about Talend JobServer, see the Talend Installation Guide.

Templeton Configuration

Templeton hostname

Fill this field with the URL of the Templeton web service.

Note

Templeton is a web service API for HCatalog. It has been renamed WebHCat by the Apache community. This service facilitates access to HCatalog and the related Hadoop elements such as Pig. For further information about Templeton (WebHCat), see https://cwiki.apache.org/confluence/display/Hive/WebHCat+UsingWebHCat.

 

Templeton port

Fill this field with the port of the URL of the Templeton web service. By default, the value for this field is 50111.


 

Use kerberos authentication

If you are accessing the Hadoop cluster running with Kerberos security, select this check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.

This check box is available depending on the Hadoop distribution you are connecting to.

  Use a keytab to authenticate

Select the Use a keytab to authenticate check box to log into a Kerberos-enabled Hadoop system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

 

Operation on

Select an object from the list for the DB operation as follows:

Database: The HCatalog managed database in HDFS.

Table: The HCatalog managed table in HDFS.

Partition: The partition specified by the user.

 

Operation

Select an action from the list for the DB operation. For further information about the DB operation in HDFS, see https://cwiki.apache.org/Hive/.
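
As a point of reference, these actions correspond broadly to Hive DDL statements. The following is a minimal, indicative sketch of the DDL behind the main choices, using a hypothetical database named mydb; the component itself issues these operations through the WebHCat service rather than a Hive CLI.

    -- "Create"
    CREATE DATABASE mydb;
    -- "Drop if exist"
    DROP DATABASE IF EXISTS mydb;
    -- "Drop and create" chains the two:
    DROP DATABASE mydb;
    CREATE DATABASE mydb;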

  Create the table only if it doesn't exist already

Select this check box to avoid creating a duplicate table when you create a table.

Note

This check box is enabled only when you have selected Table from the Operation on list.
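
In Hive DDL terms, this check box corresponds to the IF NOT EXISTS clause. A minimal sketch, using a hypothetical table name and columns:

    -- Creating the table is a no-op if it already exists:
    CREATE TABLE IF NOT EXISTS mytable (id INT, name STRING);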

HCatalog Configuration

Database

Fill this field with the name of the database in which the HCatalog managed tables are placed.

 

Table

Fill this field to operate on one or multiple tables in a database or on a specified HDFS location.

Note

This field is enabled only when you have selected Table from the Operation on list. For further information about the operation on Table, see https://cwiki.apache.org/Hive/.

 

Partition

Fill this field to specify one or more partitions for the partition operation on a specified table. When you specify multiple partitions, separate every two partitions with a comma and enclose the partition string in double quotation marks.

If you are reading a non-partitioned table, leave this field empty.

Note

This field is enabled only when you select Partition from the Operation on list. For further information about the operation on Partition, see https://cwiki.apache.org/Hive/.
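
The partition string follows the key=value form of a Hive partition specification, for example "match_age=27". As an illustrative sketch (not the component's exact call), the plain Hive DDL equivalent of creating such a partition on a hypothetical table would be:

    -- Add a partition for match_age=27 if it is not already there:
    ALTER TABLE mytable ADD IF NOT EXISTS PARTITION (match_age=27);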

  Username

Fill this field with the username for the DB authentication.

 

Database location

Fill this field with the location of the database file in HDFS.

Note

This field is enabled only when you select Database from the Operation on list.

 

Database description

The description for the database to be created.

Note

This field is enabled only when you select Database from the Operation on list.

  Create an external table

Select this check box to create an external table in an alternative path defined in the Set HDFS location field in the Advanced settings view. For further information about creating an external table, see https://cwiki.apache.org/Hive/.

Note

This check box is enabled only when you select Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list.
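
In Hive DDL terms, an external table keeps its data at a location you choose instead of the warehouse directory, and dropping it removes only the metadata. A minimal sketch, with a hypothetical table name and HDFS path:

    CREATE EXTERNAL TABLE mytable (id INT, name STRING)
    LOCATION '/user/hdp/external/mytable';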

  Format

Select a file format from the list to specify the format of the external table you want to create:

TEXTFILE: Plain text files.

RCFILE: Record Columnar files. For further information about RCFILE, see http://hive.apache.org/javadocs/r0.10.0/api/org/apache/hadoop/hive/ql/io/RCFile.html.

Note

RCFILE is only available starting with Hive 0.6.0. This list is enabled only when you select Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list.
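
In Hive DDL the file format is expressed with a STORED AS clause. A sketch of the two formats offered in this list, using hypothetical table names:

    CREATE TABLE mytable_text (id INT, name STRING) STORED AS TEXTFILE;
    CREATE TABLE mytable_rc (id INT, name STRING) STORED AS RCFILE;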

  Set partitions

Select this check box to set the partition schema by clicking Edit schema to the right of the Set partitions check box. The partition schema is either built-in or stored remotely in the Repository.

Note

This check box is enabled only when you select Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list. You must follow the rules of using partition schema in HCatalog managed tables. For more information about the rules in using partition schema, see https://cwiki.apache.org/confluence/display/Hive/HCatalog.

 

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: The schema already exists and is stored in the Repository, hence can be reused in various projects and Job designs. Related topic: see Talend Studio User Guide.

  Set the user group to use

Select this check box to specify the user group.

Note

This check box is enabled only when you select Drop/Drop if exist/Drop and create/Drop if exist and create from the Operation list. By default, the value for this field is root. For more information about the user group in the server, contact your system administrator.

  Option

Select a clause when you drop a database.

Note

This list is enabled only when you select Database from the Operation on list and Drop/Drop if exist/Drop and create/Drop if exist and create from the Operation list. For more information about Drop operation on database, see https://cwiki.apache.org/Hive/.
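
Hive's DROP DATABASE statement accepts a RESTRICT or CASCADE clause, which is presumably what this list exposes: RESTRICT (the default) fails if the database still contains tables, while CASCADE drops those tables first. A sketch with a hypothetical database name:

    DROP DATABASE IF EXISTS mydb RESTRICT; -- fail if the database is not empty
    DROP DATABASE IF EXISTS mydb CASCADE;  -- drop any contained tables as well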

  Set the permissions to use

Select this check box to specify the permissions needed by the operation you select from the Operation list.

Note

This check box is enabled only when you select Drop/Drop if exist/Drop and create/Drop if exist and create from the Operation list. By default, the value for this field is rwxrw-r-x. For more information on user permissions, contact your system administrator.

  Set File location

Enter the directory in which partitioned data is stored.

Note

This check box is enabled only when you select Partition from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list. For further information about storing partitioned data in HDFS, see https://cwiki.apache.org/Hive/.

 

Die on error

This check box is cleared by default, meaning that the row on error is skipped and the process is completed for error-free rows.

Advanced settings

Comment

Fill this field with the comment for the table you want to create.

Note

This field is enabled only when you select Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list in the Basic settings view.

  Set HDFS location

Select this check box to specify an HDFS location to which the table you want to create is saved. Deselect it to save the table in the warehouse directory defined by the key hive.metastore.warehouse.dir in the Hive configuration file hive-site.xml.

Note

This check box is enabled only when you select Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list in the Basic settings view. For further information about saving data in HDFS, see https://cwiki.apache.org/Hive/.
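
When this check box is cleared, the table is created under the metastore warehouse directory. As a point of reference, that directory is defined in hive-site.xml by an entry like the following; the path shown is the common default, not necessarily the one configured on your cluster:

    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/user/hive/warehouse</value>
    </property>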

  Set row format(terminated by)

Select this check box to use and define the row formats when you want to create a table:

Field: Select this check box to use Field as the row format. The default value for this field is "\u0001". You can also specify a customized char in this field.

Collection Item: Select this check box to use Collection Item as the row format. The default value for this field is "\u0002". You can also specify a customized char in this field.

Map Key: Select this check box to use Map Key as the row format. The default value for this field is "\u0003". You can also specify a customized char in this field.

Line: Select this check box to use Line as the row format. The default value for this field is "\n". You can also specify a customized char in this field.

Note

This check box is enabled only when you select Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list in the Basic settings view. For further information about row formats in the HCatalog managed table, see https://cwiki.apache.org/Hive/.
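
These four options map onto Hive's delimited row format clause. A sketch showing all four delimiters with their default values, for a hypothetical table; note that the octal escapes '\001', '\002' and '\003' in Hive DDL correspond to the "\u0001", "\u0002" and "\u0003" defaults shown in the component's fields:

    CREATE TABLE mytable (id INT, tags ARRAY<STRING>, props MAP<STRING,STRING>)
    ROW FORMAT DELIMITED
      FIELDS TERMINATED BY '\001'
      COLLECTION ITEMS TERMINATED BY '\002'
      MAP KEYS TERMINATED BY '\003'
      LINES TERMINATED BY '\n';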

  Properties

Click [+] to add one or more lines to define table properties. The table properties allow you to tag the table definition with your own metadata key/value pairs. Make sure that the values in both the Key and Value columns are enclosed in double quotation marks.

Note

This table is enabled only when you select Database/Table from the Operation on list and Create/Drop and create/Drop if exist and create from the Operation list in the Basic settings view. For further information about table properties, see https://cwiki.apache.org/Hive/.
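
In Hive DDL these key/value pairs appear in a TBLPROPERTIES clause. A minimal sketch with hypothetical keys and values:

    CREATE TABLE mytable (id INT)
    TBLPROPERTIES ('creator'='talend_user', 'loaded'='nightly');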

Retrieve the HCatalog logs

Select this check box to retrieve log files generated during HCatalog operations.
  Standard Output Folder

Browse to, or enter the directory where the log files are stored.

Note

This field is enabled only when you select the Retrieve the HCatalog logs check box.

 

Error Output Folder

Browse to, or enter the directory where the error log files are stored.

Note

This field is enabled only when you select the Retrieve the HCatalog logs check box.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is commonly used in a single-component subjob.

HCatalog is built on top of the Hive metastore to provide a read and write interface for Pig and MapReduce, so that these systems can use the metadata of Hive to easily read and write data in HDFS.

For further information, see Apache documentation about HCatalog: https://cwiki.apache.org/confluence/display/Hive/HCatalog.

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client, and allows the subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.
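
    For example, on a Windows machine the argument might look like the following; the installation path is hypothetical, so replace it with the actual location of your MapR client:

        -Djava.library.path="C:\opt\mapr\hadoop\hadoop-2.7.0\lib\native"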

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

When Use kerberos authentication is selected, the component cannot work with IBM JVM.

Scenario: HCatalog table management on Hortonworks Data Platform

This scenario describes a six-component Job that covers the common operations for HCatalog table management on Hortonworks Data Platform. The sub-sections in this scenario cover the following DB operations:

  • Creating a table in the database in HDFS;

  • Writing data to the HCatalog managed table;

  • Writing data to the partitioned table using tHCatalogLoad;

  • Reading data from the HCatalog managed table;

  • Outputting the data read from the table in HDFS to the console.

Note

Knowledge of Hive Data Definition Language and HCatalog Data Definition Language is required. For further information about Hive Data Definition Language, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL. For further information about HCatalog Data Definition Language, see https://cwiki.apache.org/confluence/display/HCATALOG/Design+Document+-+Java+APIs+for+HCatalog+DDL+Commands.

Setting up the Job

  1. Drop the following components from the Palette to the design workspace: tHCatalogOperation, tHCatalogLoad, tHCatalogInput, tHCatalogOutput, tFixedFlowInput, and tLogRow.

  2. Right-click tHCatalogOperation to connect it to the tFixedFlowInput component using a Trigger > OnSubjobOk connection.

  3. Right-click tFixedFlowInput to connect it to tHCatalogOutput using a Row > Main connection.

  4. Right-click tFixedFlowInput to connect it to tHCatalogLoad using a Trigger > OnSubjobOk connection.

  5. Right-click tHCatalogLoad to connect it to the tHCatalogInput component using a Trigger > OnSubjobOk connection.

  6. Right-click tHCatalogInput to connect it to tLogRow using a Row > Main connection.

Creating a table in HDFS

  1. Double-click tHCatalogOperation to open its Basic settings view.

  2. Click Edit schema to define the schema for the table to be created.

  3. Click [+] to add at least one column to the schema and click OK when you finish setting the schema. In this scenario, the columns added to the schema are: name, country and age.

  4. Fill the Templeton hostname field with the URL of the Templeton web service you are using. In this scenario, fill this field with "192.168.0.131".

  5. Fill the Templeton port field with the port for the Templeton hostname. By default, the value for this field is "50111".

  6. Select Table from the Operation on list and Drop if exist and create from the Operation list to create a table in HDFS.

  7. Fill the Database field with an existing database name in HDFS. In this scenario, the database name is "talend".

  8. Fill the Table field with the name of the table to be created. In this scenario, the table name is "Customer".

  9. Fill the Username field with the username for the DB authentication.

  10. Select the Set the user group to use check box to specify the user group. The default user group is "root"; specify the value for this field according to your actual environment.

  11. Select the Set the permissions to use check box to specify the user permission. The default value for this field is "rwxrwxr-x".

  12. Select the Set partitions check box to enable the partition schema.

  13. Click the Edit schema button next to the Set partitions check box to define the partition schema.

  14. Click [+] to add one column to the schema and click OK when you finish setting the schema. In this scenario, the column added to the partition schema is: match_age.
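
Configured this way, the component produces a table roughly equivalent to the following Hive DDL; the column types are assumptions for illustration, as the actual types come from the schema you defined:

    -- "Drop if exist and create" on database talend, table Customer:
    DROP TABLE IF EXISTS talend.Customer;
    CREATE TABLE talend.Customer (
      name STRING,
      country STRING,
      age INT
    )
    PARTITIONED BY (match_age INT);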

Writing data to the existing table

  1. Double-click tFixedFlowInput to open its Basic settings view.

  2. Click Edit schema to define the same schema as the one you defined in tHCatalogOperation.

  3. Fill the Number of rows field with the integer 8.

  4. Select Use Inline Table in the Mode area.

  5. Click [+] to add new lines in the inline table.

  6. Double-click tHCatalogOutput to open its Basic settings view.

  7. Click Sync columns to retrieve the schema defined in the preceding component.

  8. Fill the NameNode URI field with the URI of the NameNode. In this scenario, this URI is "192.168.0.131".

  9. Fill the File name field with the HDFS location of the file you write data to. In this scenario, the file location is "/user/hdp/Customer/Customer.csv".

  10. Select Overwrite from the Action list.

  11. Fill the Templeton hostname field with the URL of the Templeton web service you are using. In this scenario, fill this field with "192.168.0.131".

  12. Fill the Templeton port field with the port for the Templeton hostname. By default, the value for this field is "50111".

  13. Fill the Database field, the Table field, and the Username field with the same values you specified in tHCatalogOperation.

  14. Fill the Partition field with "match_age=27".

  15. Fill the File location field with the HDFS location to which the table will be saved. In this example, use "hdfs://192.168.0.131:8020/user/hdp/Customer".

Writing data to the partitioned table using tHCatalogLoad

  1. Double-click tHCatalogLoad to open its Basic settings view.

  2. Fill the Partition field with "match_age=26".

  3. Do the rest of the settings in the same way as configuring tHCatalogOperation.

Reading data from the table in HDFS

  1. Double-click tHCatalogInput to open its Basic settings view.

  2. Click Edit schema to define the schema of the table to be read from the database.

  3. Click [+] to add at least one column to the schema. In this scenario, the columns added to the schema are age and name.

  4. Fill the Partition field with "match_age=26".

  5. Do the rest of the settings in the same way as configuring tHCatalogOperation.

Outputting the data read from the table in HDFS to the console

  1. Double-click tLogRow to open its Basic settings view.

  2. Click Sync columns to retrieve the schema defined in the preceding component.

  3. Select Table from the Mode area.

Job execution

Press Ctrl+S to save your Job and F6 to execute it.

The data of the restricted table read from HDFS is displayed on the console.

Type http://talend-hdp:50075/browseDirectory.jsp?dir=/user/hdp/Customer&namenodeInfoPort=50070 into the address bar of your browser to view the table you created:

Click the Customer.csv link to view the content of the table you created.