tPigFilterRow - 6.3

Talend Open Studio for Big Data Components Reference Guide

EnrichVersion
6.3
EnrichProdName
Talend Open Studio for Big Data
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

The tPigFilterRow component filters or splits the input flow in a Pig process based on conditions set on given column(s).

Purpose

In a Pig process, this component applies filtering conditions on one or more specified columns in order to split or filter data from a relation.

tPigFilterRow Properties

Component family

Big Data / Hadoop

 

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Filter configuration

Click the Add button beneath the Filter configuration table to set one or more filter conditions.

Note that when the column used by a condition is of the string type, the text entered in the Value column must be surrounded by both single and double quotation marks (for example, "'California'"): the double quotation marks are required by Talend's code generator, and the single quotation marks are required by Pig's grammar.
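For instance, entering "'California'" as the Value of a condition on the Country column results in a Pig FILTER statement that keeps only the single quotes; the sketch below uses hypothetical relation names A and B:

```pig
-- Value entered in the Filter configuration table: "'California'"
-- The code generator strips the double quotes, producing a statement like:
B = FILTER A BY Country == 'California';
```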

Note

This table disappears if you select Use advanced filter.

 

Use advanced filter

Select this check box to define an advanced filter condition by entering a customized filter expression in the Filter field.
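An advanced filter expression combines Pig comparison and boolean operators over the schema columns, entered as a Java string (hence the outer double quotes). As a sketch, assuming the schema has Country and Age columns:

```pig
-- Entered in the Filter field:
-- "(Country matches 'USA' OR Country matches 'Canada') AND Age > 21"
-- Resulting Pig statement (relation names are illustrative):
B = FILTER A BY (Country matches 'USA' OR Country matches 'Canada') AND Age > 21;
```

Note that the Pig matches operator performs a full-string regular-expression match, not a substring search.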

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has such a check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is commonly used as an intermediate step in a Pig process.

Prerequisites

The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box in the Window menu. This argument provides the Studio with the path to the native library of that MapR client, and allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR.
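On a Linux machine, the VM argument might look like the following; the exact path depends on your MapR client and Hadoop versions and is an assumption here:

```
-Djava.library.path=/opt/mapr/hadoop/hadoop-2.7.0/lib/native
```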

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

Knowledge of Pig scripts is required.

Scenario: Filtering rows of data based on a condition and saving the result to a local file

This scenario describes a four-component Job that filters a list of customers to identify those from a particular country and saves the resulting list to a local file. Before the input data is filtered, duplicate entries are removed from the list.

The input file contains three columns: Name, Country, and Age, and it has some duplicate entries, as shown below:

Mario;PuertoRico;49
Mike;USA;22
Ricky;PuertoRico;37
Silvia;Spain;20
Billy;Canada;21
Ricky;PuertoRico;37
Romeo;UK;19
Natasha;Russia;25
Juan;Cuba;23
Bob;Jamaica;55
Mario;PuertoRico;49

Dropping and linking components

  1. Drop the following components from the Palette to the design workspace: tPigLoad, tPigDistinct, tPigFilterRow, and tPigStoreResult.

  2. Right-click tPigLoad, select Row > Pig Combine from the contextual menu, and click tPigDistinct to link these two components.

  3. Repeat this operation to link tPigDistinct to tPigFilterRow, and tPigFilterRow to tPigStoreResult using Row > Pig Combine connections to form a Pig process.

Configuring the components

Loading the input data and removing duplicates

  1. Double-click tPigLoad to open its Basic settings view.

  2. Click the [...] button next to Edit schema to open the [Schema] dialog box.

  3. Click the [+] button to add three columns according to the data structure of the input file: Name (string), Country (string) and Age (integer), and then click OK to save the setting and close the dialog box.

  4. Click Local in the Mode area.

  5. Fill in the Input file URI field with the full path to the input file.

  6. Select PigStorage from the Load function list, and leave the rest of the settings as they are.

  7. Double-click tPigDistinct to open its Basic settings view, and click Sync columns to make sure that the input schema structure is correctly propagated from the preceding component.

    This component will remove any duplicates from the data flow.

Configuring the filter

  1. Double-click tPigFilterRow to open its Basic settings view.

  2. Click Sync columns to make sure that the input schema structure is correctly propagated from the preceding component.

  3. Select Use advanced filter and fill in the Filter field with filter expression:

    "Country matches 'PuertoRico'"

    This filter expression selects the rows of data that contain "PuertoRico" in the Country column.

Configuring the file output

  1. Double-click tPigStoreResult to open its Basic settings view.

  2. Click Sync columns to make sure that the input schema structure is correctly propagated from the preceding component.

  3. Fill in the Result file field with the full path to the result file.

  4. If the target file already exists, select the Remove result directory if exists check box.

  5. Select PigStorage from the Store function list, and leave the rest of the settings as they are.
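Taken together, the four configured components generate a Pig script along the following lines. The relation names, file paths, and the ';' field delimiter (inferred from the sample data) are illustrative assumptions, not the literal output of the code generator:

```pig
-- tPigLoad: read the customer file with the three-column schema
A = LOAD '/path/to/customers.csv' USING PigStorage(';')
    AS (Name:chararray, Country:chararray, Age:int);
-- tPigDistinct: remove duplicate rows
B = DISTINCT A;
-- tPigFilterRow: keep only the rows matching the advanced filter
C = FILTER B BY Country matches 'PuertoRico';
-- tPigStoreResult: write the filtered rows to the result location
STORE C INTO '/path/to/result' USING PigStorage(';');
```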

Saving and executing the Job

  1. Press Ctrl+S to save your Job.

  2. Press F6 or click the Run button on the Run tab to run the Job.

    The result file contains the customers from the specified country.
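Given the sample input above, removing duplicates and then keeping only the rows whose Country column matches 'PuertoRico' leaves two customers, so the result file should contain the following (row order may vary, since DISTINCT does not guarantee ordering):

```
Mario;PuertoRico;49
Ricky;PuertoRico;37
```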