Scenario: Replicating a flow and sorting two identical flows respectively - 6.1

Talend Components Reference Guide

The Job in this scenario uses Pig components to handle names and states loaded from a given HDFS system. It reads and replicates the input flow, then sorts the two identical flows by name and by state respectively, and writes both results back into that HDFS system.

Before starting to replicate this Job, ensure that you have the appropriate rights to read and write data in the Hadoop distribution to be used and that Pig is properly installed in that distribution.

Linking the components

  1. In the Integration perspective of Talend Studio, create an empty Job, named Replicate for example, from the Job Designs node in the Repository tree view.

    For further information about how to create a Job, see the Talend Studio User Guide.

  2. Drop tPigLoad, tPigReplicate, two tPigSort components, and two tPigStoreResult components onto the workspace.

    The tPigLoad component reads data from the given HDFS system. The sample data used in this scenario reads as follows:

    Andrew Kennedy;Mississippi
    Benjamin Carter;Louisiana
    Benjamin Monroe;West Virginia
    Bill Harrison;Tennessee
    Calvin Grant;Virginia
    Chester Harrison;Rhode Island
    Chester Hoover;Kansas
    Chester Kennedy;Maryland
    Chester Polk;Indiana
    Dwight Nixon;Nevada
    Dwight Roosevelt;Mississippi
    Franklin Grant;Nebraska

    The location of the data in this scenario is /user/ychen/raw/Name&State.csv.

  3. Connect these components using Row > Pig Combine links.

Configuring tPigLoad

  1. Double-click tPigLoad to open its Component view.

  2. Click the button next to Edit schema to open the schema editor.

  3. Click the button twice to add two rows and name them Name and State, respectively.

  4. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  5. In the Mode area, select Map/Reduce because the Hadoop distribution to be used in this scenario is installed on a remote machine. Once you have selected it, the parameters to be set appear.

  6. In the Distribution and the Version lists, select the Hadoop distribution to be used.

  7. In the Load function list, select PigStorage.

  8. In the NameNode URI field and the JobTracker host field, enter the locations of the NameNode and the JobTracker to be used for Map/Reduce, respectively.

  9. In the Input file URI field, enter the location of the data to be read from HDFS. In this example, the location is /user/ychen/raw/Name&State.csv.

  10. In the Field separator field, enter the semicolon ;.
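
The settings above map onto a Pig Latin LOAD statement along these lines. This is only a sketch, not the code the Studio actually generates: the alias raw is an assumed name, and the chararray types reflect the two-column schema defined in the schema editor (the NameNode and JobTracker locations are handled by the component configuration and do not appear in the statement itself).

    -- Load the semicolon-separated file into a two-column relation.
    raw = LOAD '/user/ychen/raw/Name&State.csv'
          USING PigStorage(';')
          AS (Name:chararray, State:chararray);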

Configuring tPigReplicate

  1. Double-click tPigReplicate to open its Component view.

  2. Click the button next to Edit schema to open the schema editor and verify that its schema is identical to that of the preceding component.

    Note

    If this component does not have the same schema as the preceding component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema from the preceding component; once done, the warning icon disappears.

Configuring tPigSort

Two tPigSort components are used to sort the two identical output flows: one based on the Name column and the other on the State column.

  1. Double-click the first tPigSort component to open its Component view to define the sorting by name.

  2. In the Sort key table, add one row by clicking the button under this table.

  3. In the Column column, select Name from the drop-down list and select ASC in the Order column.

  4. Double-click the other tPigSort to open its Component view to define the sorting by state.

  5. In the Sort key table, add one row, then select State from the drop-down list in the Column column and select ASC in the Order column.
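
Because both tPigSort components consume the same replicated flow, in Pig Latin terms this amounts to two ORDER statements referring to the same relation. A minimal sketch, reusing the assumed alias raw from the load step (byName and byState are likewise illustrative names):

    -- Each branch sorts the full replicated flow on its own key, in ascending order.
    byName  = ORDER raw BY Name ASC;
    byState = ORDER raw BY State ASC;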

Configuring tPigStoreResult

Two tPigStoreResult components are used to write each of the sorted data into HDFS.

  1. Double-click the first tPigStoreResult component to open its Component view to write the data sorted by name.

  2. In the Result file field, enter the directory where the data will be written. This directory will be created if it does not exist. In this scenario, it is /user/ychen/sort/tPigreplicate/byName.csv.

  3. Select Remove result directory if exists.

  4. In the Store function list, select PigStorage.

  5. In the Field separator field, enter the semicolon ;.

  6. Do the same for the other tPigStoreResult component but set another directory for the data sorted by state. In this scenario, it is /user/ychen/sort/tPigreplicate/byState.csv.
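
Each tPigStoreResult then writes one sorted flow back into HDFS, which corresponds roughly to one STORE statement per branch. This is a sketch using the assumed aliases from the previous steps; the Remove result directory if exists option has no counterpart in it, since that option simply deletes the target directory before the STORE runs.

    -- Write each sorted flow into its own directory, using ; as the field separator.
    STORE byName  INTO '/user/ychen/sort/tPigreplicate/byName.csv'  USING PigStorage(';');
    STORE byState INTO '/user/ychen/sort/tPigreplicate/byState.csv' USING PigStorage(';');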

Executing the Job

You can now run the Job.

  • Press F6 to run it.

Once done, browse to the locations where the results were written in HDFS.

One output directory contains the data sorted by name; the other contains the data sorted by state.
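
If you prefer the command line to a graphical HDFS browser, one way to check the output is the cat command of the Pig Grunt shell, which prints the files found under the given directory. The paths below are the ones used in this scenario; the exact part file names inside each directory depend on the run.

    grunt> cat /user/ychen/sort/tPigreplicate/byName.csv
    grunt> cat /user/ychen/sort/tPigreplicate/byState.csv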

If you need more details about the Job, it is recommended to use the web console of the JobTracker provided by the Hadoop distribution you are using.

In JobHistory, you can easily find the execution status of your Pig Job because the name of the Job is automatically created by concatenating the name of the project that contains the Job, the name and version of the Job itself and the label of the first tPigLoad component used in it. The naming convention of a Pig Job in JobHistory is ProjectName_JobNameVersion_FirstComponentName.
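
For example, if the Job Replicate (version 0.1) belongs to a project named LOCALPROJECT and its first tPigLoad component is labeled tPigLoad_1, the Pig Job would appear in JobHistory under a name such as LOCALPROJECT_Replicate0.1_tPigLoad_1; the project name, version number, and component label in this example are illustrative.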