tUniqRow - 6.3

Talend Open Studio Components Reference Guide



Compares entries and sorts out duplicate entries from the input flow.


Ensures the data quality of the input or output flow in a Job.

tUniqRow Properties

Component family

Data Quality


Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.



Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.



Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.


Unique key

In this area, select one or more columns on which deduplication is carried out:

- Select the Key attribute check box to carry out deduplication on all the columns.

- Select the Case sensitive check box to differentiate upper case from lower case.
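The routing logic described above can be sketched in plain Java (the class and method names below are illustrative, not Talend's generated code): a row whose key has not been seen yet goes to the uniques flow, and every later occurrence of the same key goes to the duplicates flow.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UniqRowSketch {

    // Build the deduplication key from the columns marked as Key attribute.
    // caseSensitive mirrors the Case sensitive check box.
    static String buildKey(String[] row, int[] keyColumns, boolean caseSensitive) {
        StringBuilder key = new StringBuilder();
        for (int c : keyColumns) {
            String value = row[c];
            key.append(caseSensitive ? value : value.toLowerCase()).append('\u0001');
        }
        return key.toString();
    }

    // Returns [uniques, duplicates]: the first occurrence of each key goes to
    // uniques, every later occurrence goes to duplicates.
    static List<List<String[]>> split(List<String[]> rows, int[] keyColumns,
                                      boolean caseSensitive) {
        Set<String> seen = new HashSet<>();
        List<String[]> uniques = new ArrayList<>();
        List<String[]> duplicates = new ArrayList<>();
        for (String[] row : rows) {
            if (seen.add(buildKey(row, keyColumns, caseSensitive))) {
                uniques.add(row);
            } else {
                duplicates.add(row);
            }
        }
        return List.of(uniques, duplicates);
    }
}
```

Note how the Case sensitive option only changes how the key is compared; the rows themselves are passed through unmodified.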

Advanced settings

Only once each duplicated key

Select this check box to send only the first duplicate of each key to the output flow for duplicates.


Use of disk (suitable for processing large row set)

Select this check box to generate temporary files on the hard disk when processing a large amount of data. This helps to prevent Job execution failure caused by memory overflow. With this check box selected, you also need to define:

- Buffer size in memory: Specify the number of rows that can be buffered in memory before a temporary file is generated on the hard disk.

- Directory for temp files: Set the location where the temporary files should be stored.


Make sure that you specify an existing directory for temporary files; otherwise your Job execution will fail.
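The buffering behavior can be sketched as follows (the class, method, and file names are hypothetical; tUniqRow's actual temporary-file format is internal to Talend): rows accumulate in memory and are flushed to a file in the configured directory whenever the buffer is full, and a missing directory fails immediately.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class DiskBufferSketch {
    private final List<String> buffer = new ArrayList<>();
    private final int bufferSize;   // "Buffer size in memory"
    private final Path tempDir;     // "Directory for temp files"
    private int spillCount = 0;

    DiskBufferSketch(int bufferSize, Path tempDir) {
        // Like tUniqRow, fail fast if the temp directory does not exist.
        if (!Files.isDirectory(tempDir)) {
            throw new IllegalArgumentException("Temp directory does not exist: " + tempDir);
        }
        this.bufferSize = bufferSize;
        this.tempDir = tempDir;
    }

    void add(String row) throws IOException {
        buffer.add(row);
        if (buffer.size() >= bufferSize) {
            spill();
        }
    }

    // Flush the in-memory buffer to a temporary file on disk.
    private void spill() throws IOException {
        Path file = tempDir.resolve("rows_" + (spillCount++) + ".tmp");
        Files.write(file, buffer);
        buffer.clear();
    }
}
```

A larger buffer size means fewer, larger temporary files but more memory held by the Job; the check box trades disk I/O for heap space.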


Ignore trailing zeros for BigDecimal

Select this check box to ignore trailing zeros for BigDecimal data.


tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

NB_UNIQUES: the number of unique rows. This is an After variable and it returns an integer.

NB_DUPLICATES: the number of duplicate rows. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
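In a Job, these After variables are read from Talend's globalMap (for example in a tJava component), keyed by the component's unique name. The snippet below mimics that pattern with a plain HashMap standing in for globalMap; the component name tUniqRow_1 and the counts are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarsSketch {

    // The retrieval pattern you would use in a tJava component, where
    // globalMap is provided by the generated Job code.
    static String summary(Map<String, Object> globalMap, String componentName) {
        Integer nbUniques = (Integer) globalMap.get(componentName + "_NB_UNIQUES");
        Integer nbDuplicates = (Integer) globalMap.get(componentName + "_NB_DUPLICATES");
        return "uniques=" + nbUniques + ", duplicates=" + nbDuplicates;
    }

    public static void main(String[] args) {
        // Stand-in for Talend's globalMap; in a real Job, tUniqRow_1 sets
        // these After variables itself once it has finished executing.
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("tUniqRow_1_NB_UNIQUES", 4);
        globalMap.put("tUniqRow_1_NB_DUPLICATES", 1);
        System.out.println(summary(globalMap, "tUniqRow_1"));
    }
}
```

Because NB_UNIQUES and NB_DUPLICATES are After variables, read them only once the component has finished, for example in a subJob triggered by OnSubjobOk.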


This component handles a flow of data, so it requires input and output components and is defined as an intermediary step.


If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.



Scenario 1: Deduplicating entries

In this five-component Job, we will sort the entries of an input name list, identify duplicated names, and display the unique names and the duplicated names on the Run console.

Setting up the Job

  1. Drop a tFileInputDelimited, a tSortRow, a tUniqRow, and two tLogRow components from the Palette to the design workspace, and name the components as shown above.

  2. Connect the tFileInputDelimited component, the tSortRow component, and the tUniqRow component using Row > Main connections.

  3. Connect the tUniqRow component and the first tLogRow component using a Row > Uniques connection.

  4. Connect the tUniqRow component and the second tLogRow component using a Row > Duplicates connection.

Configuring the components

  1. Double-click the tFileInputDelimited component to display its Basic settings view.

  2. Click the [...] button next to the File Name field to browse to your input file.

  3. Define the header and footer rows. In this use case, the first row of the input file is the header row.

  4. Click Edit schema to define the schema for this component. In this use case, the input file has five columns: Id, FirstName, LastName, Age, and City. Then click OK to propagate the schema and close the schema editor.

  5. Double-click the tSortRow component to display its Basic settings view.

  6. To rearrange the entries in alphabetical order of the names, add two rows in the Criteria table by clicking the plus button, select the FirstName and LastName columns under Schema column, select alpha as the sorting type, and select the sorting order.

  7. Double-click the tUniqRow component to display its Basic settings view.

  8. In the Unique key area, select the columns on which you want deduplication to be carried out. In this use case, you will sort out duplicated names.

  9. In the Basic settings view of each of the tLogRow components, select the Table option to view the Job execution result in table mode.

Saving and executing the Job

  1. Press Ctrl+S to save your Job.

  2. Run the Job by pressing F6 or clicking the Run button on the Run tab.

    The unique names and duplicated names are displayed in different tables on the Run console.
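Under the hood, this scenario amounts to a sort followed by a keyed split. The self-contained sketch below reproduces that pipeline in plain Java (the sample data, class names, and record layout are invented for illustration; a recent JDK is assumed for record support).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupScenarioSketch {

    // Mirrors the five-column schema of the input file.
    record Person(int id, String firstName, String lastName, int age, String city) {}

    // Sort by FirstName then LastName (tSortRow), then route the first
    // occurrence of each name to uniques and the rest to duplicates (tUniqRow).
    static List<List<Person>> run(List<Person> input) {
        List<Person> sorted = new ArrayList<>(input);
        sorted.sort(Comparator.comparing(Person::firstName)
                              .thenComparing(Person::lastName));
        Set<String> seen = new HashSet<>();
        List<Person> uniques = new ArrayList<>();
        List<Person> duplicates = new ArrayList<>();
        for (Person p : sorted) {
            if (seen.add(p.firstName() + "|" + p.lastName())) {
                uniques.add(p);
            } else {
                duplicates.add(p);
            }
        }
        return List.of(uniques, duplicates);
    }
}
```

The two result lists correspond to the tables printed by the two tLogRow components on the Run console.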