tPigJoin - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Warning

This component will be available in the Palette of the studio on the condition that you have subscribed to one of the Talend solutions with Big Data.

tPigJoin Properties

Component family

Big Data / Hadoop

 

Function

This component allows you to perform joins of two files based on join keys.

Purpose

The tPigJoin component is used to perform inner joins and outer joins of two files based on join keys to create data that will be used by Pig.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Note

To make this component work, two schemas must be set: the schema of the main flow and the schema of the lookup flow. In the output part of the main schema, the columns of the main input file must be manually concatenated with those of the lookup file.

 

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: The schema already exists and is stored in the Repository, hence can be reused in various projects and Job designs. Related topic: see Talend Studio User Guide.

Reference file

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Note

To make this component work, two schemas must be set: the schema of the main flow and the schema of the lookup flow. In the output part of the main schema, the columns of the main input file must be manually concatenated with those of the lookup file.

 

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: The schema already exists and is stored in the Repository, hence can be reused in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Filename

Fill in the path to the lookup file.

 

Field Separator

Enter a character, a string, or a regular expression to separate fields in the transferred data.

 

Join key

Click the plus button to add lines to set the Join key for Input file and Lookup file.

 

Join mode

Select a join mode from the list:

inner-join: Select this mode to perform an inner join of two or more relations based on Join keys.

left-outer-join: Select this mode to perform a left outer join of two or more relations based on Join keys.

right-outer-join: Select this mode to perform a right outer join of two or more relations based on Join keys.

full-outer-join: Select this mode to combine the effect of applying both left and right outer joins.

For further information about inner join and outer join, see:

http://en.wikipedia.org/wiki/Join_%28SQL%29
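Under the hood, each join mode maps to a variant of the Pig Latin JOIN statement. The following sketch shows what the generated statements might resemble; the relation names, file paths, and schemas are illustrative, not the exact code the Studio produces:

```pig
-- Load two relations (paths and schemas are illustrative)
A = LOAD 'input.csv' USING PigStorage(';') AS (id:int, name:chararray, groupId:int);
B = LOAD 'lookup.csv' USING PigStorage(';') AS (groupId_ref:int, groupName:chararray);

-- inner-join: keeps only rows whose keys match in both relations
C = JOIN A BY groupId, B BY groupId_ref;

-- left-outer-join: keeps all rows of A, padding unmatched B fields with null
C = JOIN A BY groupId LEFT OUTER, B BY groupId_ref;

-- right-outer-join and full-outer-join use RIGHT and FULL respectively
C = JOIN A BY groupId FULL OUTER, B BY groupId_ref;
```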

Advanced settings

Optimize the join

Select this check box to optimize the performance of joins using REPLICATED, SKEWED, or MERGE joins. For further information about optimized joins, see:

http://pig.apache.org/docs/r0.8.1/piglatin_ref1.html#Specialized+Joins
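When this check box is selected, the generated JOIN statement carries a USING clause naming the specialized join type. A sketch, with illustrative relation and key names:

```pig
-- replicated join: the small relation (B) is held in memory on each mapper
C = JOIN A BY groupId, B BY groupId_ref USING 'replicated';

-- skewed join: handles join keys with a highly uneven distribution
C = JOIN A BY groupId, B BY groupId_ref USING 'skewed';

-- merge join: both inputs must already be sorted on the join keys
C = JOIN A BY groupId, B BY groupId_ref USING 'merge';
```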

 

Use partitioner

Select this check box to specify the Hadoop Partitioner that controls the partitioning of the keys of the intermediate map-outputs. For further information about the usage of Hadoop Partitioner, see:

http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapred/Partitioner.html

 

Increase parallelism

Select this check box to set the number of reduce tasks for the MapReduce Jobs.
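Together with the Use partitioner option above, this corresponds to the PARTITION BY and PARALLEL clauses of a Pig JOIN statement. A sketch, assuming a hypothetical custom partitioner class org.example.MyPartitioner:

```pig
-- PARTITION BY names a Hadoop Partitioner implementation (hypothetical class here);
-- PARALLEL sets the number of reduce tasks used for the join
C = JOIN A BY groupId, B BY groupId_ref
    PARTITION BY org.example.MyPartitioner PARALLEL 4;
```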

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is commonly used as an intermediate step together with an input component and an output component.

Prerequisites

The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client. It allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data of Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

Knowledge of Pig scripts is required.

 

Scenario: Joining two files based on an exact match and saving the result to a local file

This scenario describes a four-component Job that joins the data of an input file with that of a reference file based on a given join key, removes unwanted columns, and then saves the final result to a local file.

The main input file contains the information about people's IDs, first names, last names, group IDs, and salaries, as shown below:

1;Woodrow;Johnson;3;1013.39
2;Millard;Monroe;2;8077.59
3;Calvin;Eisenhower;3;6866.88
4;Lyndon;Wilson;3;5726.28
5;Ronald;Garfield;2;4158.58
6;Rutherford;Buchanan;3;2897.00
7;Calvin;Coolidge;1;6650.66
8;Ulysses;Roosevelt;2;7854.78
9;Grover;Tyler;1;5226.88
10;Bill;Tyler;2;8964.66

The reference file contains only the information of group IDs and group names:

1;group_A
2;group_B

Dropping and linking the components

  1. Drop the following components from the Palette to the design workspace: tPigLoad, tPigJoin, tPigFilterColumns, and tPigStoreResult.

  2. Connect these components in a series using Row > Pig Combine connections.

Configuring the components

Loading the main input file

  1. Double-click tPigLoad to open its Basic settings view.

  2. Click the [...] button next to Edit schema to open the [Schema] dialog box.

  3. Click the [+] button to add columns, name them and define the column types according to the structure of the input file. In this example, the input schema has five columns: id (integer), firstName (string), lastName (string), groupId (integer), and salary (double).

    Then click OK to validate the setting and close the dialog box.

  4. Click Local in the Mode area.

  5. Select PigStorage from the Load function list.

  6. Fill in the Input file URI field with the full path to the input file, and leave the rest of the settings as they are.

Loading the reference file and setting up an inner join

  1. Double-click tPigJoin to open its Basic settings view.

  2. Click the [...] button for the main schema to open the [Schema] dialog box.

  3. Check that the input schema is correctly retrieved from the preceding component. If needed, click the [->>] button to copy all the columns of the input schema to the output schema.

  4. Click the [+] button under the output panel to add new columns according to the data structure of the reference file, groupId_ref (integer) and groupName (string) in this example. Then click OK to close the dialog box.

  5. Click the [...] button for the lookup flow schema to open the [Schema] dialog box.

  6. Click the [+] button under the output panel to add two columns: groupId_ref (integer) and groupName (string), and then click OK to close the dialog box.

  7. In the Filename field, specify the full path to the reference file.

  8. Click the [+] button under the Join key table to add a new line, and select groupId and groupId_ref respectively from the Input and Lookup lists to match data from the main input flow with data from the lookup flow based on the group ID.

  9. From the Join mode list, select inner-join.

Defining the final output schema and the output file

  1. Double-click tPigFilterColumns to open its Basic settings view.

  2. Click the [...] button next to Edit schema to open the [Schema] dialog box.

  3. From the input schema, select the columns you want to include in your result file by clicking them one after another while pressing the Shift key, and click the [->] button to copy them to the output schema. Then, click OK to validate the schema setting and close the dialog box.

    In this example, we want the result file to include all the information except the group IDs.

  4. Double-click tPigStoreResult to open its Basic settings view.

  5. Click Sync columns to retrieve the schema structure from the preceding component.

  6. Fill in the Result file field with the full path to the result file, and select the Remove result file directory if exists check box.

  7. Select PigStorage from the Store function list, and leave the rest of the settings as they are.
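Taken together, the four configured components correspond roughly to a Pig Latin script like the one below. The file paths are illustrative placeholders and the exact code generated by the Studio may differ:

```pig
-- tPigLoad: load the main input file
people = LOAD '/tmp/people.csv' USING PigStorage(';')
    AS (id:int, firstName:chararray, lastName:chararray, groupId:int, salary:double);

-- tPigJoin: load the lookup file and perform the inner join on the group ID
groups = LOAD '/tmp/groups.csv' USING PigStorage(';')
    AS (groupId_ref:int, groupName:chararray);
joined = JOIN people BY groupId, groups BY groupId_ref;

-- tPigFilterColumns: keep all columns except the group IDs
result = FOREACH joined GENERATE id, firstName, lastName, salary, groupName;

-- tPigStoreResult: write the result to a local file
STORE result INTO '/tmp/joined_out' USING PigStorage(';');
```

With the sample data above, the inner join keeps only the rows whose group ID is 1 or 2, since group ID 3 has no match in the reference file.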

Saving and executing the Job

  1. Press Ctrl+S to save your Job.

  2. Press F6 or click Run on the Run tab to run the Job.

    The result file includes all the information related to people of group A and group B, except their group IDs.