tPigReplicate - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Warning

This component is available in the Palette of Talend Studio only if you have subscribed to one of the Talend solutions with Big Data.

tPigReplicate Properties

Component family

Big Data / Pig

 

Function

tPigReplicate is used after an input Pig component; it duplicates the incoming schema into as many identical output flows as needed.

Purpose

This component allows you to perform different operations on the same schema.
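
In hand-written Pig Latin, the same effect needs no dedicated operator: several statements can simply read the same relation. A minimal sketch (the path, field names and downstream operations are illustrative, not part of this component's configuration):

    -- load once; path and schema are examples only
    data = LOAD '/tmp/input.csv' USING PigStorage(';') AS (Name:chararray, State:chararray);

    -- two different operations applied to the same flow
    filtered = FILTER data BY State == 'Nevada';
    grouped  = GROUP data BY State;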

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Click Sync columns to retrieve the schema from the previous component connected in the Job.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is not startable (green background); it requires tPigLoad as the input component and expects other Pig components to handle its output flow(s).

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client, for instance -Djava.library.path=/opt/mapr/lib (an illustrative value; use the actual location on your machine). This allows the subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Connections

Outgoing links (from this component to another):

Row: Pig combine. This link joins all data processes designed in the Job and executes them simultaneously.

Incoming links (from one component to this one):

Row: Pig combine.

For further information regarding connections, see Talend Studio User Guide.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Scenario: Replicating a flow and sorting two identical flows respectively

The Job in this scenario uses Pig components to handle names and states loaded from a given HDFS system. It reads and replicates the input flow, sorts the two identical flows by name and by state respectively, and writes the results back into that HDFS system.

Before starting to reproduce this Job, ensure that you have the appropriate rights to read and write data in the Hadoop distribution to be used and that Pig is properly installed in that distribution.

Linking the components

  1. In the Integration perspective of Talend Studio, create an empty Job, named Replicate for example, from the Job Designs node in the Repository tree view.

    For further information about how to create a Job, see the Talend Studio User Guide.

  2. Drop tPigLoad, tPigReplicate, two tPigSort components and two tPigStoreResult components onto the workspace.

    The tPigLoad component reads data from the given HDFS system. The sample data used in this scenario reads as follows:

    Andrew Kennedy;Mississippi
    Benjamin Carter;Louisiana
    Benjamin Monroe;West Virginia
    Bill Harrison;Tennessee
    Calvin Grant;Virginia
    Chester Harrison;Rhode Island
    Chester Hoover;Kansas
    Chester Kennedy;Maryland
    Chester Polk;Indiana
    Dwight Nixon;Nevada
    Dwight Roosevelt;Mississippi
    Franklin Grant;Nebraska

    The location of the data in this scenario is /user/ychen/raw/NameState.csv.

  3. Connect the components using Row > Pig combine links, as sketched below.
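
Once connected, the Job layout looks roughly as follows; labels such as tPigSort_1 are the default names the Studio assigns, so yours may differ. Every link is a Row > Pig combine link.

    tPigLoad_1 --> tPigReplicate_1 --> tPigSort_1 --> tPigStoreResult_1
                                  \--> tPigSort_2 --> tPigStoreResult_2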

Configuring tPigLoad

  1. Double-click tPigLoad to open its Component view.

  2. Click the [...] button next to Edit schema to open the schema editor.

  3. Click the [+] button twice to add two rows and name them Name and State, respectively.

  4. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.

  5. In the Mode area, select Map/Reduce because the Hadoop distribution to be used in this scenario is installed on a remote machine. Once you select it, the parameters to be set appear.

  6. In the Distribution and the Version lists, select the Hadoop distribution to be used.

  7. In the Load function list, select PigStorage.

  8. In the NameNode URI field and the JobTracker host field, enter the locations of the NameNode and the JobTracker to be used for Map/Reduce, respectively.

  9. In the Input file URI field, enter the location of the data to be read from HDFS. In this example, the location is /user/ychen/raw/NameState.csv.

  10. In the Field separator field, enter a semicolon (;). The equivalent Pig Latin statement is sketched below.
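
Taken together, these settings amount, roughly, to the following Pig Latin LOAD statement (a sketch for orientation only; the alias raw is illustrative, and the actual script is generated by the Studio):

    raw = LOAD '/user/ychen/raw/NameState.csv' USING PigStorage(';') AS (Name:chararray, State:chararray);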

Configuring tPigReplicate

  1. Double-click tPigReplicate to open its Component view.

  2. Click the [...] button next to Edit schema to open the schema editor and verify that its schema is identical to that of the preceding component.

    Note

    If this component does not have the same schema as the preceding component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema from the preceding component; once done, the warning icon disappears.

Configuring tPigSort

Two tPigSort components are used to sort the two identical output flows: one based on the Name column and the other on the State column.

  1. Double-click the first tPigSort component to open its Component view and define the sorting by name.

  2. In the Sort key table, add one row by clicking the [+] button under this table.

  3. In the Column column, select Name from the drop-down list and select ASC in the Order column.

  4. Double-click the other tPigSort to open its Component view and define the sorting by state.

  5. In the Sort key table, add one row, then select State from the drop-down list in the Column column and select ASC in the Order column. The equivalent Pig Latin statements are sketched below.
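
In Pig Latin terms, the two tPigSort components amount to two ORDER BY statements that read the same relation, which is exactly what the replicated flow provides (a sketch; the alias raw comes from the LOAD sketch above):

    -- both statements consume the same relation: this is the replication
    byName  = ORDER raw BY Name ASC;
    byState = ORDER raw BY State ASC;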

Configuring tPigStoreResult

Two tPigStoreResult components are used to write each of the sorted data into HDFS.

  1. Double-click the first tPigStoreResult component to open its Component view and configure it to write the data sorted by name.

  2. In the Result file field, enter the directory where the data will be written. This directory is created if it does not exist. In this scenario, it is /user/ychen/sort/tPigreplicate/byName.csv.

  3. Select Remove result directory if exists.

  4. In the Store function list, select PigStorage.

  5. In the Field separator field, enter a semicolon (;).

  6. Do the same for the other tPigStoreResult component, but set another directory for the data sorted by state. In this scenario, it is /user/ychen/sort/tPigreplicate/byState.csv. The corresponding Pig Latin sketch follows this list.
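
These two components amount, roughly, to the following Pig Latin STORE statements (a sketch; the aliases byName and byState come from the sort sketch above). Note that the Remove result directory if exists option has no direct equivalent in a plain STORE statement; it deletes the target directory beforehand, since a Pig STORE fails if its output directory already exists:

    STORE byName  INTO '/user/ychen/sort/tPigreplicate/byName.csv' USING PigStorage(';');
    STORE byState INTO '/user/ychen/sort/tPigreplicate/byState.csv' USING PigStorage(';');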

Executing the Job

  • Press F6 to run this Job.

Once done, browse to the locations where the results were written in HDFS.

The following image presents the results sorted by name:

The following image presents the results sorted by state:

If you need more details about the Job, it is recommended to use the web console of the JobTracker provided by the Hadoop distribution you are using.

In JobHistory, you can easily find the execution status of your Pig Job because the name of the Job is automatically created by concatenating the name of the project that contains the Job, the name and version of the Job itself, and the label of the first tPigLoad component used in it. The naming convention of a Pig Job in JobHistory is ProjectName_JobNameVersion_FirstComponentName; in this scenario, for example, a Job named Replicate in version 0.1 in a project named MYPROJECT might appear as MYPROJECT_Replicate0.1_tPigLoad_1.