tHBaseInput - 6.1

Talend Components Reference Guide


Warning

This component will be available in the Palette of Talend Studio on the condition that you have subscribed to one of the Talend solutions with Big Data.

Function

tHBaseInput extracts the columns corresponding to the schema definition and passes them to the next component via a Main row link.

Purpose

tHBaseInput reads data from a given HBase database and extracts the selected columns. HBase is a distributed, column-oriented database that hosts very large, sparsely populated tables on clusters.

If you have subscribed to one of the Talend solutions with Big Data, this component is available in the following types of Jobs:

  • Standard: see tHBaseInput properties.

  • Map/Reduce: see tHBaseInput in Talend Map/Reduce Jobs.

  • Spark Batch: see tHBaseInput properties in Spark Batch Jobs.

HBase filters

The following list presents the HBase filters available in Talend Studio and the parameters required by each filter.

Single Column Value Filter

  Parameters required: Filter column, Filter family, Filter operation, Filter value, Filter comparator type.

  Objective: It compares the value of a given column against the value defined for the Filter value parameter. If the filtering condition is met, all columns of the row are returned.

Family filter

  Parameters required: Filter family, Filter operation, Filter comparator type.

  Objective: It returns the columns of the family that meets the filtering condition.

Qualifier filter

  Parameters required: Filter column, Filter operation, Filter comparator type.

  Objective: It returns the columns whose column qualifiers match the filtering condition.

Column prefix filter

  Parameters required: Filter column, Filter family.

  Objective: It returns all columns whose qualifiers have the prefix defined for the Filter column parameter.

Multiple column prefix filter

  Parameters required: Filter column (multiple prefixes are separated by commas, for example, id,id_1,id_2), Filter family.

  Objective: It works the same way as a Column prefix filter does but allows specifying multiple prefixes.

Column range filter

  Parameters required: Filter column (the two ends of the range are separated by a comma), Filter family.

  Objective: It allows intra-row scanning and returns all matching columns of a scanned row.

Row filter

  Parameters required: Filter operation, Filter value, Filter comparator type.

  Objective: It filters on row keys and returns all rows that match the filtering condition.

Value filter

  Parameters required: Filter operation, Filter value, Filter comparator type.

  Objective: It returns only columns that have a specific value.

The behavior of the HBase filters listed above is subject to revisions made by Apache in its Apache HBase project; therefore, to fully understand how to use these filters, we recommend reading Apache's HBase documentation.
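
As a point of reference, the filter types and the And/Or logical operators in this list map directly onto the HBase client filter API. The following is a minimal Java sketch written against that API, not code generated by the Studio; the table name customer and the family and column names are placeholders, and values are compared as byte arrays:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.BinaryComparator;
    import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
    import org.apache.hadoop.hbase.filter.CompareFilter;
    import org.apache.hadoop.hbase.filter.FilterList;
    import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseFilterSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // And corresponds to FilterList.Operator.MUST_PASS_ALL,
            // Or to FilterList.Operator.MUST_PASS_ONE.
            FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
            // Single Column Value Filter: uses Filter family, Filter column,
            // Filter operation, Filter value and Filter comparator type.
            filters.addFilter(new SingleColumnValueFilter(
                    Bytes.toBytes("family1"), Bytes.toBytes("age"),
                    CompareFilter.CompareOp.GREATER,
                    new BinaryComparator(Bytes.toBytes("30"))));
            // Column prefix filter: uses the prefix set in Filter column.
            filters.addFilter(new ColumnPrefixFilter(Bytes.toBytes("id")));
            Scan scan = new Scan();
            scan.setFilter(filters);
            HTable table = new HTable(conf, "customer");
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    System.out.println(row);
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }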

tHBaseInput properties

Component family

Big Data / HBase

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

Use an existing connection

Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM.

  • The Custom option allows you to connect to a cluster different from any of the distributions given in this list, that is to say, to connect to a cluster not officially supported by Talend.

To connect to a custom distribution, once you have selected Custom, click the button to display the dialog box in which you can either:

  1. Select Import from existing version to import an officially supported distribution as base and then add other required jar files which the base distribution does not provide.

  2. Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop elements and the index file of these libraries.

    In Talend Exchange, members of the Talend community have shared ready-for-use configuration zip files, which you can download from this Hadoop configuration list and use directly in your connection. However, because of the ongoing evolution of the different Hadoop-related projects, you might not be able to find the configuration zip corresponding to your distribution in this list; in that case, it is recommended to use the Import from existing version option to take an existing distribution as a base and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy, due to the wide range of different Hadoop distributions and versions that are available. As such, you should only attempt to set up such a connection if you have sufficient Hadoop experience to handle any issues on your own.

    Note

    In this dialog box, the active check box must be kept selected so as to import the jar files pertinent to the connection to be created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

 

HBase version

Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. Along with the evolution of Hadoop, please note the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component with the path value explicitly pointing to the MapReduce framework archive of your cluster. For example:

    mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework
  • If you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend Jobserver to execute the Job in the same type of operating system in which the Hortonworks Data Platform V2.0.0 distribution you are using is run. For further information about Talend Jobserver, see Talend Installation Guide.

 

Hadoop version of the distribution

This list is displayed only when you have selected Custom from the distribution list to connect to a cluster not yet officially supported by the Studio. In this situation, you need to select the Hadoop version of this custom cluster, that is to say, Hadoop 1 or Hadoop 2.

 

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between Talend and HBase. Note that when you configure Zookeeper, you might need to set the zookeeper.znode.parent property to define the root of the relative path of the HBase files in Zookeeper; in that case, select the Set Zookeeper znode parent check box to define this property.

 

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are using.
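
These two fields, together with the optional znode parent, correspond to standard HBase client configuration keys. A minimal Java sketch of the equivalent raw client configuration, with placeholder values, could look like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class HBaseConnectionSketch {
        public static Configuration buildConfiguration() {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "zookeeper-host");    // Zookeeper quorum
            conf.set("hbase.zookeeper.property.clientPort", "2181"); // Zookeeper client port
            conf.set("zookeeper.znode.parent", "/hbase-unsecure");   // Set Zookeeper znode parent
            return conf;
        }
    }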

 

Use kerberos authentication

If you are accessing an HBase database running with Kerberos security, select this check box, then enter the HBase-related principal names in the corresponding fields. You should be able to find this information in the hbase-site.xml file of the cluster to be used.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
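
As an illustration of what keytab-based login involves, the underlying Hadoop security API can be called as follows. This is a sketch rather than the Studio's generated code, and the principal and keytab path are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // The OS user running this code does not have to match the
            // principal; it only needs read access to the keytab file.
            UserGroupInformation.loginUserFromKeytab(
                    "guest@EXAMPLE.COM", "/path/to/guest.keytab");
        }
    }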

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Set table Namespace mappings

Select this check box and in the field that is displayed, enter the string to be used to construct the mapping between an Apache HBase table and a MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

  Table name

Type in the name of the HBase table from which you need to extract columns.

 

Define a row selection

Select this check box and then in the Start row and the End row fields, enter the corresponding row keys to specify the range of the rows you want the current component to extract.

Unlike the filters you can set using Is by filter, which require loading all records before filtering out the ones to be used, this feature lets you directly select only the rows to be used.
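
In HBase client terms, this row selection corresponds to a scan bounded by a start row and a stop row, rather than a post-read filter. A minimal sketch with placeholder row keys (in the raw API, the start row is inclusive and the stop row exclusive):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowSelectionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Scan scan = new Scan();
            scan.setStartRow(Bytes.toBytes("1")); // corresponds to the Start row field
            scan.setStopRow(Bytes.toBytes("9"));  // corresponds to the End row field
            HTable table = new HTable(conf, "customer");
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    System.out.println(row);
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }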

  Mapping

Complete this table to map the columns of the HBase table to be used with the schema columns you have defined for the data flow to be processed.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 Advanced settings

tStatCatcher Statistics

Select this check box to collect log data at the component level. Note that this check box is not available in the Map/Reduce version of the component.

  Properties

If you need to use custom configuration for your HBase, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the corresponding ones used by the Studio for its HBase engine.

For example, if you need to define the value of the dfs.replication property as 1 for the HBase configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.

Note

This table is not available when you are using an existing connection by selecting the Use an existing connection check box in the Basic settings view.

Is by filter

Select this check box to use HBase filters to perform fine-grained data selection from HBase, such as selection of keys, or values, based on regular expressions.

Once it is selected, the Filter table used to define filtering conditions becomes available.

These filters are advanced features provided by HBase and subject to constraints explained in Apache's HBase documentation. Therefore, advanced knowledge of HBase is required to make full use of these filters.

 Logical operation

Select the operator you need to use to define the logical relation between filters. The available operators are:

  • And: every defined filtering condition must be satisfied. It represents the relationship FilterList.Operator.MUST_PASS_ALL.

  • Or: at least one of the defined filtering conditions must be satisfied. It represents the relationship FilterList.Operator.MUST_PASS_ONE.

 Filter

Click the button under this table to add as many rows as required, each row representing a filter. The parameters you may need to set for a filter are:

  • Filter type: the drop-down list presents pre-existing filter types that are already defined by HBase. Select the type of the filter you need to use.

  • Filter column: enter the column qualifier on which you need to apply the active filter. Whether this parameter is mandatory depends on the type of the filter and of the comparator you are using. For example, it is not used by the Row Filter type but is required by the Single Column Value Filter type.

  • Filter family: enter the column family on which you need to apply the active filter. Whether this parameter is mandatory depends on the type of the filter and of the comparator you are using. For example, it is not used by the Row Filter type but is required by the Single Column Value Filter type.

  • Filter operation: select from the drop-down list the operation to be used for the active filter.

  • Filter Value: enter the value on which you want to use the operator selected from the Filter operation drop-down list.

  • Filter comparator type: select the type of the comparator to be combined with the filter you are using.

Depending on the Filter type you are using, some or all of the parameters are mandatory. For further information, see HBase filters.

Global Variables

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
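
For example, to reuse the row count in a component that runs after this one (such as a tJava), the After variable can be read from the Job's global map; the component label tHBaseInput_1 below is a placeholder for your actual component name:

    // In a tJava (or similar) component placed after tHBaseInput_1:
    int rowCount = ((Integer) globalMap.get("tHBaseInput_1_NB_LINE")).intValue();
    System.out.println("Rows read from HBase: " + rowCount);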

For further information about variables, see Talend Studio User Guide.

Usage

This component is a start component of a Job and always needs an output link.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by HBase. For further information, see Apache's HBase documentation on http://hbase.apache.org/.

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data of the Talend Big Data Getting Started Guide.
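
For example, on Windows the VM argument could look like the following; the exact path is hypothetical and depends on where your MapR client is installed:

    -Djava.library.path=C:\opt\mapr\hadoop\hadoop-VERSION\lib\native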

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Scenario: Exchanging customer data with HBase

In this scenario, a six-component Job is used to exchange customer data with a given HBase.

The six components are:

  • tHBaseConnection: creates a connection to your HBase database.

  • tFixedFlowInput: creates the data to be written into your HBase. In a real use case, this component could be replaced by other input components, such as tFileInputDelimited.

  • tHBaseOutput: writes the data it receives from the preceding component into your HBase.

  • tHBaseInput: extracts the columns of interest from your HBase.

  • tLogRow: presents the execution result.

  • tHBaseClose: closes the transaction.

To replicate this scenario, proceed as the following sections illustrate.

Note

Before starting the replication, your HBase and Zookeeper services should be correctly installed and properly configured. This scenario explains only how to use the Talend solution to exchange data with a given HBase.

Dropping and linking the components

To do this, proceed as follows:

  1. Drop tHBaseConnection, tFixedFlowInput, tHBaseOutput, tHBaseInput, tLogRow and tHBaseClose from Palette onto the Design workspace.

  2. Right-click tHBaseConnection to open its contextual menu and select the Trigger > On Subjob Ok link from this menu to connect this component to tFixedFlowInput.

  3. Do the same to create the OnSubjobOk link from tFixedFlowInput to tHBaseInput and then to tHBaseClose.

  4. Right-click tFixedFlowInput and select the Row > Main link to connect this component to tHBaseOutput.

  5. Do the same to create the Main link from tHBaseInput to tLogRow.

The components to be used in this scenario are all placed and linked. You then need to configure them successively.

Configuring the connection

To configure the connection to your Zookeeper service and thus to the HBase of interest, proceed as follows:

  1. On the Design workspace of your Studio, double-click the tHBaseConnection component to open its Component view.

  2. Select Hortonworks Data Platform 1.0 from the HBase version list.

  3. In the Zookeeper quorum field, type in the name or the URL of the Zookeeper service you are using. In this example, the name of the service in use is hbase.

  4. In the Zookeeper client port field, type in the number of the client listening port. In this example, it is 2181.

  5. If the Zookeeper znode parent location has been defined in the Hadoop cluster you are connecting to, you need to select the Set zookeeper znode parent check box and enter the value of this property in the field that is displayed.

Configuring the process of writing data into the HBase

To do this, proceed as follows:

  1. On the Design workspace, double-click the tFixedFlowInput component to open its Component view.

  2. In this view, click the three-dot button next to Edit schema to open the schema editor.

  3. Click the plus button three times to add three rows and in the Column column, rename the three rows respectively as: id, name and age.

  4. In the Type column, click each of these rows and from the drop-down list, select the data type of every row. In this scenario, they are Integer for id and age, String for name.

  5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  6. In the Mode area, select Use Inline Content (delimited file) to display the fields for editing.

  7. In the Content field, type in the delimited data to be written into the HBase, using the semicolon ";" as the field separator. In this example, the data is:

    1;Albert;23
    2;Alexandre;24
    3;Alfred-Hubert;22
    4;Andre;40
    5;Didier;28
    6;Anthony;35
    7;Artus;32
    8;Catherine;34
    9;Charles;21
    10;Christophe;36
    11;Christian;67
    12;Danniel;54
    13;Elisabeth;58
    14;Emile;32
    15;Gregory;30 
  8. Double-click tHBaseOutput to open its Component view.

    Note

    If this component does not have the same schema as the preceding component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema from the preceding component; once done, the warning icon disappears.

  9. Select the Use an existing connection check box and then select the connection you have configured earlier. In this example, it is tHBaseConnection_1.

  10. In the Table name field, type in the name of the table to be created in the HBase. In this example, it is customer.

  11. In the Action on table field, select the action of interest from the drop-down list. In this scenario, select Drop table if exists and create. This way, if a table named customer already exists in the HBase, it will be disabled and deleted before the new table is created.

  12. Click the Advanced settings tab to open the corresponding view.

  13. In the Family parameters table, add two rows by clicking the plus button, rename them as family1 and family2 respectively and then leave the other columns empty. These two column families will be created in the HBase using the default family performance options.

    Note

    The Family parameters table is available only when the action you have selected in the Action on table field is to create a table in HBase. For further information about this Family parameters table, see tHBaseOutput.

  14. In the Families table of the Basic settings view, enter the family names in the Family name column, each corresponding to the column this family contains. In this example, the id and the age columns belong to family1 and the name column to family2.

    Note

    These column families should already exist in the HBase to be connected to; if not, you need to define them in the Family parameters table of the Advanced settings view so that they are created at runtime.
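
To illustrate the storage layout this configuration produces, here is a sketch using the raw HBase client API, not the code the Job generates; the row key choice is a placeholder, while the family-to-column mapping follows the Families table above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CustomerWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "customer");
            // First record of the sample data: 1;Albert;23.
            // id and age go to family1, name goes to family2.
            Put put = new Put(Bytes.toBytes("1"));
            put.add(Bytes.toBytes("family1"), Bytes.toBytes("id"), Bytes.toBytes("1"));
            put.add(Bytes.toBytes("family2"), Bytes.toBytes("name"), Bytes.toBytes("Albert"));
            put.add(Bytes.toBytes("family1"), Bytes.toBytes("age"), Bytes.toBytes("23"));
            table.put(put);
            table.close();
        }
    }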

Configuring the process of extracting data from the HBase

To do this, perform the following operations:

  1. Double-click tHBaseInput to open its Component view.

  2. Select the Use an existing connection check box and then select the connection you have configured earlier. In this example, it is tHBaseConnection_1.

  3. Click the three-dot button next to Edit schema to open the schema editor.

  4. Click the plus button three times to add three rows and rename them as id, name and age respectively in the Column column. This means that you extract these three columns from the HBase.

  5. Select the types for each of the three columns. In this example, Integer for id and age, String for name.

  6. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  7. In the Table name field, type in the name of the table from which you extract the columns of interest. In this scenario, the table is customer.

  8. In the Mapping table, the Column column has already been filled in automatically since the schema was defined, so simply enter the name of each family in the Column family column, corresponding to the columns it contains (an illustrative sketch follows this list).

  9. Double-click tHBaseClose to open its Component view.

  10. In the Component List field, select the connection you need to close. In this example, this connection is tHBaseConnection_1.
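
As announced in step 8, the read side of this configuration corresponds to a scan that requests only the mapped family:column pairs. An illustrative raw-API sketch, again not the Job's generated code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CustomerReadSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "customer");
            Scan scan = new Scan();
            // Request only the mapped columns: family1:id, family1:age, family2:name.
            scan.addColumn(Bytes.toBytes("family1"), Bytes.toBytes("id"));
            scan.addColumn(Bytes.toBytes("family1"), Bytes.toBytes("age"));
            scan.addColumn(Bytes.toBytes("family2"), Bytes.toBytes("name"));
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    String id = Bytes.toString(row.getValue(Bytes.toBytes("family1"), Bytes.toBytes("id")));
                    String name = Bytes.toString(row.getValue(Bytes.toBytes("family2"), Bytes.toBytes("name")));
                    String age = Bytes.toString(row.getValue(Bytes.toBytes("family1"), Bytes.toBytes("age")));
                    System.out.println(id + ";" + name + ";" + age);
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }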

Executing the Job

To execute this Job, press F6.

Once done, the Run view is opened automatically, where you can check the execution result.

These columns of interest are extracted and you can process them according to your needs.

Log in to your HBase database to check the customer table this Job has created.
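
For example, you can verify the table from the HBase shell; the scan command lists one line per stored cell:

    hbase shell
    hbase(main):001:0> scan 'customer'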

tHBaseInput in Talend Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of the Talend solutions with Big Data and is not applicable to Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tHBaseInput, as well as the whole Map/Reduce Job using it, generates native Map/Reduce code. This section presents the specific properties of tHBaseInput when it is used in that situation. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

Databases/HBase

 

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM.

  • The Custom option allows you to connect to a cluster different from any of the distributions given in this list, that is to say, to connect to a cluster not officially supported by Talend.

To connect to a custom distribution, once you have selected Custom, click the button to display the dialog box in which you can either:

  1. Select Import from existing version to import an officially supported distribution as base and then add other required jar files which the base distribution does not provide.

  2. Select Import from zip to import the configuration zip for the custom distribution to be used. This zip file should contain the libraries of the different Hadoop elements and the index file of these libraries.

    In Talend Exchange, members of the Talend community have shared ready-for-use configuration zip files, which you can download from this Hadoop configuration list and use directly in your connection. However, because of the ongoing evolution of the different Hadoop-related projects, you might not be able to find the configuration zip corresponding to your distribution in this list; in that case, it is recommended to use the Import from existing version option to take an existing distribution as a base and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and its community provide you with the opportunity to connect to custom versions from the Studio but cannot guarantee that the configuration of whichever version you choose will be easy, due to the wide range of different Hadoop distributions and versions that are available. As such, you should only attempt to set up such a connection if you have sufficient Hadoop experience to handle any issues on your own.

    Note

    In this dialog box, the active check box must be kept selected so as to import the jar files pertinent to the connection to be created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

In the Map/Reduce version of this component, the distribution you select must be the same as the one you need to define in the Hadoop configuration view for the whole Job.

 HBase version

Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. Along with the evolution of Hadoop, please note the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component with the path value explicitly pointing to the MapReduce framework archive of your cluster. For example:

    mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework
  • If you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend Jobserver to execute the Job in the same type of operating system in which the Hortonworks Data Platform V2.0.0 distribution you are using is run. For further information about Talend Jobserver, see Talend Installation Guide.

 

Hadoop version of the distribution

This list is displayed only when you have selected Custom from the distribution list to connect to a cluster not yet officially supported by the Studio. In this situation, you need to select the Hadoop version of this custom cluster, that is to say, Hadoop 1 or Hadoop 2.

 

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between Talend and HBase. Note that when you configure Zookeeper, you might need to set the zookeeper.znode.parent property to define the root of the relative path of the HBase files in Zookeeper; in that case, select the Set Zookeeper znode parent check box to define this property.

 

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are using.

 

Use kerberos authentication

If you are accessing an HBase database running with Kerberos security, select this check box, then enter the HBase-related principal names in the corresponding fields. You should be able to find this information in the hbase-site.xml file of the cluster to be used.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Set table Namespace mappings

Select this check box and in the field that is displayed, enter the string to be used to construct the mapping between an Apache HBase table and a MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

  Table name

Type in the name of the HBase table from which you need to extract columns.

 

Define a row selection

Select this check box and then in the Start row and the End row fields, enter the corresponding row keys to specify the range of the rows you want the current component to extract.

Unlike the filters you can set using Is by filter, which require loading all records before filtering out the ones to be used, this feature lets you directly select only the rows to be used.

 Mapping

Complete this table to map the columns of the HBase table to be used with the schema columns you have defined for the data flow to be processed.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Properties

If you need to use custom configuration for your HBase, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the corresponding ones used by the Studio for its HBase engine.

For example, if you need to define the value of the dfs.replication property as 1 for the HBase configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.

Is by filter

Select this check box to use HBase filters to perform fine-grained data selection from HBase, such as selection of keys, or values, based on regular expressions.

Once it is selected, the Filter table used to define filtering conditions becomes available.

These filters are advanced features provided by HBase and subject to constraints explained in Apache's HBase documentation. Therefore, advanced knowledge of HBase is required to make full use of these filters.

 Logical operation

Select the operator you need to use to define the logical relation between filters. The available operators are:

  • And: every defined filtering condition must be satisfied. It represents the relationship FilterList.Operator.MUST_PASS_ALL.

  • Or: at least one of the defined filtering conditions must be satisfied. It represents the relationship FilterList.Operator.MUST_PASS_ONE.

 Filter

Click the button under this table to add as many rows as required, each row representing a filter. The parameters you may need to set for a filter are:

  • Filter type: the drop-down list presents pre-existing filter types that are already defined by HBase. Select the type of the filter you need to use.

  • Filter column: enter the column qualifier on which you need to apply the active filter. Whether this parameter is mandatory depends on the type of the filter and of the comparator you are using. For example, it is not used by the Row Filter type but is required by the Single Column Value Filter type.

  • Filter family: enter the column family on which you need to apply the active filter. Whether this parameter is mandatory depends on the type of the filter and of the comparator you are using. For example, it is not used by the Row Filter type but is required by the Single Column Value Filter type.

  • Filter operation: select from the drop-down list the operation to be used for the active filter.

  • Filter Value: enter the value on which you want to use the operator selected from the Filter operation drop-down list.

  • Filter comparator type: select the type of the comparator to be combined with the filter you are using.

Depending on the Filter type you are using, some or all of the parameters are mandatory. For further information, see HBase filters.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage in Map/Reduce Jobs

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

The Hadoop configuration you use for the whole Job and the Hadoop distribution you use for the HBase components must be the same. Actually, an HBase component requires that its Hadoop distribution parameter be defined separately so as to launch its HBase driver only when that component is used.

For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and not Map/Reduce Jobs.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by HBase. For further information, see Apache's HBase documentation on http://hbase.apache.org/.

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data of the Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tHBaseInput properties in Spark Batch Jobs

Component family

Databases/HBase

 

Basic settings

Storage configuration

Select the tHBaseConfiguration component from which the Spark system to be used reads the configuration information to connect to HBase.

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

 

Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see Talend Studio User Guide.

 

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Table name

Type in the name of the HBase table from which you need to extract columns.

 

Set table Namespace mappings

Select this check box and in the field that is displayed, enter the string to be used to construct the mapping between an Apache HBase table and a MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

 

Define a row selection

Select this check box and then in the Start row and the End row fields, enter the corresponding row keys to specify the range of the rows you want the current component to extract.

Unlike the filters you can set using Is by filter, which require loading all records before filtering out the ones to be used, this feature lets you directly select only the rows to be used.

 Mapping

Complete this table to map the columns of the HBase table to be used with the schema columns you have defined for the data flow to be processed.

Is by filter

Select this check box to use HBase filters to perform fine-grained data selection from HBase, such as selection of keys, or values, based on regular expressions.

Once it is selected, the Filter table used to define filtering conditions becomes available.

These filters are advanced features provided by HBase and subject to constraints explained in Apache's HBase documentation. Therefore, advanced knowledge of HBase is required to make full use of these filters.

 Logical operation

Select the operator you need to use to define the logical relation between filters. The available operators are:

  • And: every defined filtering condition must be satisfied. It represents the relationship FilterList.Operator.MUST_PASS_ALL.

  • Or: at least one of the defined filtering conditions must be satisfied. It represents the relationship FilterList.Operator.MUST_PASS_ONE.

 Filter

Click the button under this table to add as many rows as required, each row representing a filter. The parameters you may need to set for a filter are:

  • Filter type: the drop-down list presents pre-existing filter types that are already defined by HBase. Select the type of the filter you need to use.

  • Filter column: enter the column qualifier on which you need to apply the active filter. Whether this parameter is mandatory depends on the type of the filter and of the comparator you are using. For example, it is not used by the Row Filter type but is required by the Single Column Value Filter type.

  • Filter family: enter the column family on which you need to apply the active filter. Whether this parameter is mandatory depends on the type of the filter and of the comparator you are using. For example, it is not used by the Row Filter type but is required by the Single Column Value Filter type.

  • Filter operation: select from the drop-down list the operation to be used for the active filter.

  • Filter Value: enter the value on which you want to use the operator selected from the Filter operation drop-down list.

  • Filter comparator type: select the type of the comparator to be combined with the filter you are using.

Depending on the Filter type you are using, some or all of the parameters are mandatory. For further information, see HBase filters.

 

Die on HBase error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Usage in Spark Batch Jobs

In a Talend Spark Batch Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Batch components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component should use a tHBaseConfiguration component present in the same Job to connect to HBase.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file-system-related component from the Storage family is required in the same Job, so that Spark can use it to connect to the file system to which the Job's dependent jar files are transferred.

This connection is effective on a per-Job basis.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Related scenarios

No scenario is available for the Spark Batch version of this component yet.