tPigCode - 6.1

Talend Components Reference Guide

Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
Talend Studio


This component will be available in the Palette of Talend Studio on the condition that you have subscribed to one of the Talend solutions with Big Data.

tPigCode Properties

Component family

Big Data / Hadoop



This component allows you to enter custom Pig code and integrate it into a Talend program. This code is executed only once.


tPigCode extends the functionalities of a Talend Job by using Pig scripts.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.



Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.


Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.



Type in the Pig scripts you want to execute, depending on the task you need to perform. For further information about Pig function syntax, see Apache's documentation on Pig UDFs.

Pig components output tuples and automatically set up an alias for each tuple. When you use a tuple in your Pig script, you have to enter the right alias.

The alias syntax is Component_ID_rowID_Result, for example, tPigCode_1_row2_Result.
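For example, assuming a hypothetical Job in which tPigLoad_1 is connected to tPigCode_1 through a row1 link, a filter statement entered in the Scripts field would use these aliases as follows (the component and row IDs depend on your actual Job design):

```pig
-- Hypothetical aliases: tPigLoad_1_row1_RESULT is the tuple coming from
-- tPigLoad_1 over the row1 link; tPigCode_1_row2_RESULT is the tuple
-- this tPigCode component passes on over its outgoing row2 link.
tPigCode_1_row2_RESULT = FILTER tPigLoad_1_row1_RESULT BY Age > 30;
```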

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.


Enable escape

Select this check box so that you can write plain Pig code in the Scripts field without having to add the escape characters that are otherwise required for proper Java code generation.
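As a hypothetical illustration of the kind of escaping involved (the exact characters affected depend on the code generation):

```pig
-- With Enable escape selected, plain Pig can be entered directly, for
-- example a tab delimiter written as '\t':
STORE B INTO '/tmp/out' USING PigStorage('\t');
-- With the check box cleared, the same delimiter would have to be entered
-- with Java-style escaping, for example '\\t'.
```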

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.


This component is commonly used as an intermediate step together with an input component and an output component.

A tPigCode component can execute only one Pig Latin statement; therefore, if you need to execute multiple statements, you have to use a corresponding number of tPigCode components, running one after another.
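For instance, a two-statement transformation would be split across two chained tPigCode components (the aliases here are hypothetical and depend on your Job design):

```pig
-- Script of the first tPigCode component (tPigCode_1): filter the rows.
tPigCode_1_row2_RESULT = FILTER tPigLoad_1_row1_RESULT BY Age > 30;

-- Script of the second tPigCode component (tPigCode_2), connected after
-- the first one: keep only the Name column.
tPigCode_2_row3_RESULT = FOREACH tPigCode_1_row2_RESULT GENERATE Name;
```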

If a particular .jar file is required to execute a statement, you need to register that library file via the tPigLoad component that starts the Pig process in question.


The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is located, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see MapR's documentation.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.
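For example, the argument could take the following form, reusing the MAPR_INSTALL and VERSION placeholders mentioned above, which must be replaced with the actual installation directory and Hadoop version on your machine:

```
-Djava.library.path=MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native
```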

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.


If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache Log4j documentation.


Knowledge of Pig scripts is required.


Scenario: Selecting a column of data from an input file and storing it in a local file

This scenario describes a three-component Job that selects a column of data matching the filter condition defined in tPigCode and stores the result in a local file.

Setting up the Job

  1. Drop the following components from the Palette to the design workspace: tPigCode, tPigLoad, tPigStoreResult.

  2. Right-click tPigLoad to connect it to tPigCode using a Row > Pig Combine connection.

  3. Right-click tPigCode to connect it to tPigStoreResult using a Row > Pig Combine connection.

Loading the data

  1. Double-click tPigLoad to open its Basic settings view.

  2. Click the three-dot button next to Edit schema to add columns for tPigLoad.

  3. Click the plus button to add Name, Country and Age and click OK to save the setting.

  4. Select Local from the Mode area.

  5. Fill in the Input filename field with the full path to the input file.

    In this scenario, the input file is CustomerList, which contains rows of names, country names, and ages.

  6. Select PigStorage from the Load function list.

  7. Leave the rest of the settings as they are.

Configuring the tPigCode component

  1. Double-click the tPigCode component to open its Basic settings view.

  2. Click Sync columns to retrieve the schema structure from the preceding component.

  3. Fill in the Script Code field with the following expression:

    tPigCode_1_row2_RESULT = foreach tPigLoad_1_row1_RESULT generate $0 as name;

    This filter expression selects the Name column from CustomerList.
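As a hypothetical illustration (these sample rows are invented for this example), if CustomerList contained the following tab-delimited rows, tab being the default delimiter for PigStorage:

```
Andrew	USA	27
Nathalie	France	35
Wang	China	41
```

the expression in step 3 would write only the first column (Andrew, Nathalie, Wang) to the result file, since $0 refers to the first field of each tuple.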

Saving the result data to a local file

  1. Double-click tPigStoreResult to open its Basic settings view.

  2. Click Sync columns to retrieve the schema structure from the preceding component.

  3. Fill in the Result file field with the full path to the result file.

    In this scenario, the result is saved in Result file.

  4. Select Remove result directory if exists.

  5. Select PigStorage from the Store function list.

  6. Leave the rest of the settings as they are.

Executing the Job

Save your Job and press F6 to run it.

The Result file is generated containing the selected column of data.