Creating a clean data set from the suspect pairs labeled by tMatchPredict and the unique rows computed by tMatchPairing

Deduplication

author
Talend Documentation Team
EnrichVersion
6.5
EnrichProdName
Talend Big Data Platform
Talend Big Data
Talend Open Studio for Big Data
Talend Data Management Platform
Talend Real-Time Big Data Platform
Talend Data Integration
Talend ESB
Talend Data Services Platform
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend MDM Platform
Talend Data Fabric
Talend Open Studio for MDM
task
Design and Development > Third-party systems > Data Quality components > Deduplication components
Data Governance > Third-party systems > Data Quality components > Deduplication components
Data Quality and Preparation > Third-party systems > Data Quality components > Deduplication components
EnrichPlatform
Talend Studio

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

For more technologies supported by Talend, see Talend components.

In this example, there are two sources of input data: the suspect pairs labeled by tMatchPredict and the unique rows computed by tMatchPairing.
The use case described here uses two subjobs:
  • In the first subjob, tRuleSurvivorship processes the records labeled as duplicates and grouped by tMatchPredict to create a single representation of each duplicates group.

  • In the second subjob, tUnite merges the survivors and the unique rows to create a clean and deduplicated data set to be used with the tMatchIndex component.

The output file contains the clean and deduplicated data. You can index this reference data set in Elasticsearch using the tMatchIndex component.
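As a conceptual illustration only, the two-subjob flow can be sketched in plain Python. This is not Talend code: the toy data, the `survive` function, and the "longest value wins" rule are assumptions standing in for the survivorship rules you would actually configure in tRuleSurvivorship, and the list concatenation stands in for tUnite.

```python
from collections import defaultdict

# Toy suspect records labeled as duplicates and grouped by a group id
# (as tMatchPredict would output). Hypothetical data:
duplicates = [
    {"group": "g1", "name": "John Doe",  "phone": ""},
    {"group": "g1", "name": "J. Doe",    "phone": "555-0101"},
    {"group": "g2", "name": "Ana Silva", "phone": "555-0202"},
    {"group": "g2", "name": "Ana Silva", "phone": ""},
]

# Toy unique rows (as computed by tMatchPairing). Hypothetical data:
unique_rows = [
    {"group": "u1", "name": "Mark Lee", "phone": "555-0303"},
]

def survive(group):
    """Merge one duplicates group into a single survivor record using a
    simple 'longest value wins' rule per column (a stand-in for the
    survivorship rules configured in tRuleSurvivorship)."""
    survivor = {}
    for record in group:
        for col, val in record.items():
            if len(str(val)) > len(str(survivor.get(col, ""))):
                survivor[col] = val
    return survivor

# Subjob 1: one survivor per duplicates group (role of tRuleSurvivorship).
groups = defaultdict(list)
for rec in duplicates:
    groups[rec["group"]].append(rec)
survivors = [survive(g) for g in groups.values()]

# Subjob 2: union the survivors and the unique rows (role of tUnite).
clean_dataset = survivors + unique_rows
print(len(clean_dataset))  # 3 records: 2 survivors + 1 unique row
```

The resulting `clean_dataset` plays the role of the clean, deduplicated reference data set that the scenario then indexes with tMatchIndex.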