Scenario: Loading an HBase table - 6.1

This scenario uses tPigLoad and tPigStoreResult to read data from HBase and to write them to HDFS.

The HBase table to be used has three columns: id, name and age. The columns id and age belong to the column family family1, and name belongs to the column family family2.

The data stored in that HBase table are as follows:

1;Albert;23
2;Alexandre;24
3;Alfred-Hubert;22
4;Andre;40
5;Didier;28
6;Anthony;35
7;Artus;32
8;Catherine;34
9;Charles;21
10;Christophe;36
11;Christian;67
12;Danniel;54
13;Elisabeth;58
14;Emile;32
15;Gregory;30 
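
For example, the first record above occupies one HBase row whose cells are family1:id = 1, family2:name = Albert and family1:age = 23; the row key of each row is stored separately and does not appear in this sample.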

To replicate this scenario, perform the following operations:

Linking the components

  1. In the Integration perspective of Talend Studio, create an empty Job, named hbase_storage for example, from the Job Designs node in the Repository tree view.

    For further information about how to create a Job, see the Talend Studio User Guide.

  2. Drop tPigLoad and tPigStoreResult onto the workspace.

  3. Connect them using the Row > Pig Combine link.

Configuring tPigLoad

  1. Double-click tPigLoad to open its Component view.

  2. Click the [...] button next to Edit schema to open the schema editor.

  3. Click the [+] button four times to add four rows and rename them rowkey, id, name and age. The rowkey column is placed at the top of the schema to store the HBase row key; in practice, if you do not need to load the row key column, you can create only the other three columns in your schema.

  4. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  5. In the Mode area, select Map/Reduce, as we are using a remote Hadoop distribution.

  6. In the Distribution and the Version fields, select the Hadoop distribution you are using. In this example, we are using HortonWorks Data Platform V1.

  7. In the Load function field, select HBaseStorage. Then, the corresponding parameters to set appear.

  8. In the NameNode URI and the JobTracker host fields, enter the locations of those services, respectively.

  9. In the Zookeeper quorum and the Zookeeper client port fields, enter the location information of the Zookeeper service to be used.

  10. If the Zookeeper znode parent location has been defined in the Hadoop cluster you are connecting to, you need to select the Set zookeeper znode parent check box and enter the value of this property in the field that is displayed.

  11. In the Table name field, enter the name of the table from which tPigLoad reads the data.

  12. Select the Load key check box if you need to load the HBase row key column. In this example, we select it.

  13. In the Mapping table, four rows have been added automatically. In the Column family:qualifier column, enter the HBase columns you need to map with the schema columns you defined. In this scenario, we put family1:id for the id column, family2:name for the name column and family1:age for the age column.
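
For reference, the tPigLoad configuration above roughly corresponds to the Pig Latin sketched below. This is only a minimal sketch: the table name customers, the host names, the ports and the chararray types are illustrative assumptions, not values taken from this scenario.

    -- Illustrative connection properties (host names and ports are assumptions)
    SET fs.default.name 'hdfs://namenode-host:8020';
    SET mapred.job.tracker 'jobtracker-host:50300';
    SET hbase.zookeeper.quorum 'zookeeper-host';
    SET hbase.zookeeper.property.clientPort '2181';

    -- Load the HBase table; '-loadKey true' returns the row key as the first field,
    -- matching the Load key check box and the rowkey schema column
    raw = LOAD 'hbase://customers'
          USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
            'family1:id family2:name family1:age', '-loadKey true')
          AS (rowkey:chararray, id:chararray, name:chararray, age:chararray);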

Configuring tPigStoreResult

  1. Double-click tPigStoreResult to open its Component view.

  2. In the Result file field, enter the directory where you need to store the result. As tPigStoreResult automatically reuses the connection created by tPigLoad, the path in this scenario is a directory on the machine hosting the Hadoop distribution to be used.

  3. Select the Remove result directory if exists check box.

  4. In the Store function field, select PigStorage to store the result in the UTF-8 format.
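
Again for reference only, a minimal Pig Latin equivalent of this tPigStoreResult configuration, reusing the raw alias from the previous sketch, could look like the following. The output path and the semicolon field separator are assumptions chosen to match the sample data; adapt them to your own Result file setting.

    -- Clear the target directory first, mirroring "Remove result directory if exists"
    rmf /user/talend/hbase_storage_out;

    -- Write the loaded rows to HDFS as UTF-8 text with PigStorage
    STORE raw INTO '/user/talend/hbase_storage_out'
          USING PigStorage(';');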

Executing the Job

Then you can press F6 to run this Job.

Once done, you can verify the result in the HDFS system used.

If you need to obtain more details about the Job, it is recommended to use the web console of the JobTracker provided by the Hadoop distribution you are using.

In JobHistory, you can easily find the execution status of your Pig Job because the name of the Job is automatically created by concatenating the name of the project that contains the Job, the name and version of the Job itself and the label of the first tPigLoad component used in it. The naming convention of a Pig Job in JobHistory is ProjectName_JobNameVersion_FirstComponentName.
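
For example, if the Job hbase_storage (version 0.1) belongs to a project named LOCALPROJECT and its first tPigLoad component keeps the default label tPigLoad_1, the Pig Job would appear in JobHistory under a name such as LOCALPROJECT_hbase_storage_0.1_tPigLoad_1 (the project name and label used here are only illustrative).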