Matching data through multiple passes using Map/Reduce components - 7.0

Data matching

author
Talend Documentation Team
EnrichVersion
7.0
EnrichProdName
Talend Big Data Platform
Talend Data Fabric
Talend Data Management Platform
Talend Data Services Platform
Talend MDM Platform
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Data Quality components > Matching components > Data matching components
Data Quality and Preparation > Third-party systems > Data Quality components > Matching components > Data matching components
Design and Development > Third-party systems > Data Quality components > Matching components > Data matching components
EnrichPlatform
Talend Studio

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

For more technologies supported by Talend, see Talend components.

Note that Talend Map/Reduce components are available only to users who have subscribed to Big Data.

This scenario shows how to create a Talend Map/Reduce Job that matches data using Map/Reduce components. The Job generates Map/Reduce code and runs natively in Hadoop.

The Job in this scenario groups similar customer records by running them through two consecutive matching passes (tMatchGroup components) and outputs the calculated matches in groups. Each pass provides its matches to the following pass, so that the latter can add further matches identified with new rules and blocking keys.
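The multi-pass idea can be illustrated outside Talend. The following is a minimal conceptual sketch, not the tMatchGroup implementation: each pass partitions records by a blocking key, compares records only within a block, and merges the resulting match groups with those from earlier passes. The sample records, blocking keys, and similarity thresholds are invented for illustration.

```python
# Conceptual sketch of multi-pass record matching with blocking keys.
# Groups found by each pass are merged using a union-find structure,
# mirroring how a second tMatchGroup pass extends the groups of the first.
from collections import defaultdict
from difflib import SequenceMatcher

records = [
    {"id": 0, "name": "John Smith", "zip": "75001"},
    {"id": 1, "name": "Jon Smith",  "zip": "75001"},
    {"id": 2, "name": "Jane Doe",   "zip": "69002"},
    {"id": 3, "name": "J. Smith",   "zip": "69002"},
]

parent = list(range(len(records)))  # union-find over record ids

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

def run_pass(blocking_key, threshold):
    """One matching pass: block records, then match within each block."""
    blocks = defaultdict(list)
    for r in records:
        blocks[blocking_key(r)].append(r)
    for block in blocks.values():
        for a in block:
            for b in block:
                if a["id"] < b["id"]:
                    sim = SequenceMatcher(None, a["name"], b["name"]).ratio()
                    if sim >= threshold:
                        union(a["id"], b["id"])

# Pass 1: strict name similarity, blocked on ZIP code.
run_pass(lambda r: r["zip"], threshold=0.8)
# Pass 2: looser rule, blocked on the name's first letter, adding
# matches that pass 1 could not see across different ZIP codes.
run_pass(lambda r: r["name"][0], threshold=0.5)

groups = defaultdict(list)
for r in records:
    groups[find(r["id"])].append(r["id"])
print(sorted(map(sorted, groups.values())))  # → [[0, 1, 3], [2]]
```

Pass 1 only groups the two records sharing ZIP 75001; pass 2, with its different blocking key and looser rule, then pulls "J. Smith" into the same group, which is exactly the role of the subsequent pass in this scenario.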

This Job is a duplication of the Standard data integration Job described in Matching customer data through multiple passes where standard components are replaced with Map/Reduce components.

You can use Talend Studio to automatically convert the standard Job in the previous section to a Map/Reduce Job. This way, you do not need to redefine the settings of the components in the Job.

Before starting to replicate this scenario, ensure that you have appropriate rights and permissions to access the Hadoop distribution to be used.