tGlobalVarLoad - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tGlobalVarLoad defines variables using the columns of its input schema and stores the incoming data in these variables.

Purpose

tGlobalVarLoad sets variables using the incoming data so that the data can be dynamically reused by other Subjobs.

If you have subscribed to one of the Talend solutions with Big Data, this component is available in the following types of Jobs:

tGlobalVarLoad in Talend Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of the Talend solutions with Big Data and is not applicable to Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tGlobalVarLoad, as well as the other Map/Reduce components preceding it, generates native Map/Reduce code. This section presents the specific properties of tGlobalVarLoad when it is used in that situation. For further information about a Talend Map/Reduce Job, see Talend Big Data Getting Started Guide.

Component family

MapReduce/Output

 

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

The columns of the schema are set to be variable keys and the data in these columns are the variable values.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

  

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
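When you select a component variable from that list, the Studio typically inserts an expression of the following form; the unique component name shown here (tGlobalVarLoad_1) is only an example and depends on your own Job:

  ((String)globalMap.get("tGlobalVarLoad_1_ERROR_MESSAGE"))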

Usage

This component is placed at the end of a process. It generates variables that the other Subjobs within the same Job can reuse by calling the globalMap.get() method.
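As an illustration only (this is not the code the component generates), the following minimal Java sketch shows the mechanism in plain Java: globalMap behaves like a java.util.Map<String, Object>, each input column name becomes a variable key, and a later Subjob reads the value back with globalMap.get(). The class name is arbitrary; the avg key and the 2950.0 value are taken from the scenario below.

  import java.util.HashMap;
  import java.util.Map;

  public class GlobalVarLoadIllustration {
      public static void main(String[] args) {
          // Stand-in for the globalMap available in Talend generated code.
          Map<String, Object> globalMap = new HashMap<String, Object>();

          // What tGlobalVarLoad conceptually does for each column of its
          // input schema: the column name becomes the variable key and the
          // incoming value becomes the variable value.
          globalMap.put("avg", 2950.0);

          // How another Subjob of the same Job reuses the variable: get()
          // returns an Object, so the value is converted back to a Double.
          double avg = Double.valueOf(String.valueOf(globalMap.get("avg")));
          System.out.println("avg = " + avg);
      }
  }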

Limitation

n/a

Scenario: selecting the salary records above the average using a Map/Reduce Job

In this scenario, a six-component Job is created to calculate the average salary of a set of sample data and select the salaries above the average.

The sample data is already stored in the HDFS system to be used and reads as follows:

1	Lyndon	1200	
2	Ronald	3500	
3	Ulysses	5000	
4	Harry	2000	
5	Garfield	1800	
6	James	3300	
7	Chester	4200	
8	Dwight	2200	
9	Jimmy	2800	
10	Herbert	3500

Note that the separator between the fields is \t (a tab) and the three columns of the sample data are id, name and salary.

You can use the tHDFSOutput component to write the sample data in the HDFS system to be used. For further information, see tHDFSOutput.
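If you want to check this record layout outside the Studio, the following standalone Java sketch (independent of Talend; the sample line is taken from the data above) splits one record on the tab separator into its id, name and salary fields:

  public class SampleRecordCheck {
      public static void main(String[] args) {
          // One line of the sample data: id, name and salary separated by \t.
          String line = "2\tRonald\t3500";
          String[] fields = line.split("\t");

          int id = Integer.parseInt(fields[0]);          // column id
          String name = fields[1];                       // column name
          double salary = Double.parseDouble(fields[2]); // column salary, handled as a Double in the Job

          System.out.println(id + " | " + name + " | " + salary);
      }
  }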

Linking the components

  1. In the Integration perspective of the Studio, create an empty Map/Reduce Job from the Job Designs node in the Repository tree view.

    For further information about how to create a Map/Reduce Job, see Talend Big Data Getting Started Guide.

  2. In the workspace, enter the name of the component to be used and select this component from the list that appears. In this scenario, the components are tAggregateRow, tGlobalVarLoad, tMap, tLogRow and two tHDFSInput (labelled customer in this scenario) components.

  3. Connect one of the tHDFSInput components to tAggregateRow using the Row > Main link and then do the same to link tAggregateRow to tGlobalVarLoad.

    This Subjob is used to calculate the average salary and store this average in a reusable variable.

  4. Connect the same tHDFSInput component to the other tHDFSInput component using the Trigger > On Subjob Ok link.

  5. Connect this second tHDFSInput component to tMap using the Row > Main link, then do the same to connect tMap to tLogRow; in the pop-up dialog box, give this link a name of your choice.

    This Subjob is used to select the salaries above the average.

Setting up the Hadoop connection

  1. Click Run to open its view and then click the Hadoop Configuration tab to display its view for configuring the Hadoop connection for this Job.


  2. From the Property type list, select Built-in. If you have created the connection to be used in the Repository, select Repository instead so that the Studio reuses that set of connection information for this Job.

    For further information about how to create a Hadoop connection in the Repository, see the chapter describing the Hadoop cluster node of Talend Studio User Guide.

  3. In the Version area, select the Hadoop distribution to be used and its version. If you cannot find the distribution corresponding to yours in the list, select Custom to connect to a Hadoop distribution not officially supported in the Studio.

    For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.

    As Hadoop evolves, note the following changes:

    • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table with the path value explicitly pointing to the MapReduce framework archive of your cluster. For example:

      mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework
    • If you use Hortonworks Data Platform V2.0.0, the type of operating system used to run the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job on the same type of operating system as the one on which your Hortonworks Data Platform V2.0.0 distribution is run. For further information about Talend JobServer, see Talend Installation Guide.

  4. In the Name node field, enter the location of the master node, the NameNode, of the distribution to be used. For example, hdfs://tal-qa113.talend.lan:8020.

    If you are using a MapR distribution, you can simply leave maprfs:/// as it is in this field; the MapR client will then take care of creating the connection on the fly, provided that the MapR client is properly installed. For further information about how to set up a MapR client, see the following link in MapR's documentation: http://doc.mapr.com/display/MapR/Setting+Up+the+Client

  5. In the Job tracker field, enter the location of the JobTracker of your distribution. For example, tal-qa114.talend.lan:8050.

    Note that the term Job in JobTracker designates the MapReduce jobs described in Apache's documentation on http://hadoop.apache.org/.

    If you use YARN in your Hadoop cluster, such as Hortonworks Data Platform V2.0.0 or Cloudera CDH4.3+ (YARN mode), you need to specify the location of the Resource Manager instead of the JobTracker. You can then set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime the configuration for this parameter in the Hadoop cluster to be used will be ignored):

    • Select the Set resourcemanager scheduler address check box and enter the Scheduler address in the field that appears.

    • Select the Set jobhistory address check box and enter the location of the JobHistory server of the Hadoop cluster to be used. This allows the metrics information of the current Job to be stored in that JobHistory server.

    • Select the Set staging directory check box and enter the directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files such as yarn-site.xml or mapred-site.xml of your distribution.

    • Select the Use datanode hostname check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true. When connecting to an S3N filesystem, you must select this check box.

  6. If you are accessing a Hadoop cluster running with Kerberos security, select this check box and then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.

    In addition, since this component performs Map/Reduce computations, you also need to authenticate the related services such as the Job history server and the Resource manager or Jobtracker depending on your distribution in the corresponding field. These principals can be found in the configuration files of your distribution. For example, in a CDH4 distribution, the Resource manager principal is set in the yarn-site.xml file and the Job history principal in the mapred-site.xml file.

    If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

    Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

  7. In the User name field, enter the login user name for your distribution. If you leave it empty, the user name of the machine hosting the Studio will be used.

  8. In the Temp folder field, enter the path in HDFS to the folder where you store the temporary files generated during Map/Reduce computations.

  9. Leave the default value of the Path separator in server field as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable, that is, unless that separator is not a colon (:). In that case, you must change this value to the one used on that host.

  10. Leave the Clear temporary folder check box selected, unless you want to keep those temporary files.

  11. Leave the Compress intermediate map output to reduce network traffic check box selected, so as to reduce the time spent transferring the mapper task partitions to the multiple reducers.

    However, if the data transfer in the Job is negligible, it is recommended to clear this check box to deactivate the compression step, because this compression consumes extra CPU resources.

  12. If you need to use custom Hadoop properties, complete the Hadoop properties table with the property or properties to be customized. Then at runtime, these changes will override the corresponding default properties used by the Studio for its Hadoop engine.

    For further information about the properties required by Hadoop, see Apache's Hadoop documentation on http://hadoop.apache.org, or the documentation of the Hadoop distribution you need to use.

  13. If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks Data Platform V1.3, you need to set proper memory allocations for the map and reduce computations to be performed by the Hadoop system.

    In that situation, you need to enter the values you need in the Mapred job map memory mb and Mapred job reduce memory mb fields, respectively. By default, both values are 1000, which is normally appropriate for running the computations.

    If the distribution is YARN, then the memory parameters to be set become Map (in Mb), Reduce (in Mb) and ApplicationMaster (in Mb), accordingly. These fields allow you to dynamically allocate memory to the map and the reduce computations and the ApplicationMaster of YARN.

  14. If you are using Cloudera V5.5+, you can select the Use Cloudera Navigator check box to enable the Cloudera Navigator of your distribution to trace your Job lineage to the component level, including the schema changes between components.

    With this option activated, you need to set the following parameters:

    • Username and Password: these are the credentials you use to connect to your Cloudera Navigator.

    • Cloudera Navigator URL: enter the location of the Cloudera Navigator to be connected to.

    • Cloudera Navigator Metadata URL: enter the location of the Navigator Metadata.

    • Activate the autocommit option: select this check box to make Cloudera Navigator generate the lineage of the current Job at the end of the execution of this Job.

      Since this option actually forces Cloudera Navigator to generate lineages of all its available entities such as HDFS files and directories, Hive queries or Pig scripts, it is not recommended in a production environment because it slows down the Job.

    • Kill the job if Cloudera Navigator fails: select this check box to stop the execution of the Job when the connection to your Cloudera Navigator fails.

      Otherwise, leave it clear to allow your Job to continue to run.

    • Disable SSL validation: select this check box to make your Job connect to Cloudera Navigator without the SSL validation process.

      This feature is meant to facilitate testing of your Job but is not recommended for use in a production cluster.

For further information about this Hadoop Configuration tab, see the section describing how to configure the Hadoop connection for a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

For further information about the Resource Manager, its scheduler and the ApplicationMaster, see YARN's documentation such as http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/.

For further information about how to determine YARN and MapReduce memory configuration settings, see the documentation of the distribution you are using, such as the following link provided by Hortonworks: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html.

Reading the sample data into the Job

  1. Double-click either of the two tHDFSInput components to display its Basic settings view.

    These two tHDFSInput components read the same source data and are configured in the same way, so you need to configure both of them using the procedure explained in this section.

  2. Click the [...] button next to Edit schema to open the schema editor.

  3. Click the [+] button three times to add three rows and in the Column column, rename them to id, name and salary, respectively.

  4. In the Type column of the salary row, select Double.

  5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  6. In the Folder/File field, browse to the sample data to be processed in the HDFS system.

  7. In the File type area, select Text file from the Type list.

  8. In the Field separator field, enter \t.

Calculating the average

  1. Double-click tAggregateRow to open its Component view.

  2. Click the [...] button next to Edit schema to open the schema editor.

  3. In the table of the tAggregateRow schema, click the [+] button once to add one row and in the Column column, rename it to avg.

  4. In the Type column of the avg row, select Double.

  5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  6. Under the Operations table, click the [+] button to add one row and configure the following columns of this row to define the calculation of the average salary.

    • Output column: select the column of the output schema in which the average salary is stored. In this scenario, it is avg.

    • Function: select the avg function to calculate the average.

    • Input column position: select the column of the input schema that provides the source data for the calculation. In this scenario, it is salary; the resulting calculation is worked out after this list.
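    With the sample data shown at the beginning of this scenario, this operation computes

      (1200 + 3500 + 5000 + 2000 + 1800 + 3300 + 4200 + 2200 + 2800 + 3500) / 10 = 29500 / 10 = 2950

    so 2950 is the value that feeds the avg variable set in the next section.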

Setting the avg variable

  1. Double-click tGlobalVarLoad to open its Component view.

  2. Click the Sync columns button to ensure that this component retrieves the avg column of the tAggregateRow component's schema. This way the tGlobalVarLoad component defines the avg variable using the calculated average salary.

Filtering the salary records

  1. Double-click tMap to open the map editor.

    Note that the tHDFSInput component linked to this tMap has been configured along with the other tHDFSInput component linked to tAggregateRow.

  2. From the table representing the input flow (on the left side), select all three columns and drop them onto the table representing the output flow (on the right side).

  3. On the table of the input flow, click the button to display the filter expression panel.

  4. In this filter expression panel, enter

    row5.salary > Double.valueOf(String.valueOf(globalMap.get("avg"))) 

    This expression allows the tMap component to select only the salaries above the average calculated by tAggregateRow. Because globalMap.get() returns an Object, the returned value is converted back to a Double for the comparison.

    Note that row5 in this expression is the ID of the input row to the tMap component and might therefore have a different value in your scenario.

  5. Click Apply and then OK to validate these changes.

Executing the Job

Then you can run this Job.

The tLogRow component is used to present the execution result of the Job.

  1. If you want to configure the presentation mode, double-click the tLogRow component to open its Component view and then, in the Mode area, select the Table (print values in cells of a table) option.

  2. Press F6 to run this Job.

Once done, the Run view is opened automatically, where you can check the execution result.

As you can verify from the sample data presented at the beginning of this scenario, the average salary is 2950, and you can see that only the salary records above this average have been selected from the sample data.
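Given that average, the records you can expect in the output are those of Ronald (3500), Ulysses (5000), James (3300), Chester (4200) and Herbert (3500); the other five records are filtered out.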