Matching data through multiple passes using Map/Reduce components

Deprecated

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

This scenario applies only to Talend Platform products with Big Data and Talend Data Fabric.

Note that Talend Map/Reduce components are available only to users who subscribed to Big Data.

This scenario shows how to create a Talend Map/Reduce Job that matches data using Map/Reduce components. The Job generates Map/Reduce code and runs natively in Hadoop.

The Job in this scenario groups similar customer records by running two consecutive matching passes (tMatchGroup components) and outputs the calculated matches in groups. Each pass provides its matches to the following pass, which adds further matches identified using new rules and blocking keys.
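The multi-pass idea can be illustrated outside of Talend. The sketch below is a hypothetical Python illustration, not tMatchGroup's actual implementation: each pass partitions records by a blocking key, compares records within each block under a matching rule, and merges matching records into the same group (here via union-find). A second pass with a different blocking key and rule then enlarges the groups found by the first pass.

```python
def find(parent, i):
    # union-find root lookup with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def run_pass(records, parent, block_key, match_rule):
    # group record indices by blocking key, then union pairs that match
    blocks = {}
    for idx, rec in enumerate(records):
        blocks.setdefault(block_key(rec), []).append(idx)
    for members in blocks.values():
        for i in members:
            for j in members:
                if i < j and match_rule(records[i], records[j]):
                    parent[find(parent, j)] = find(parent, i)

# Sample data: names and columns are invented for this illustration.
records = [
    {"name": "John Smith", "zip": "75001"},
    {"name": "Jon Smith",  "zip": "75001"},
    {"name": "John Smith", "zip": "75002"},
    {"name": "Mary Jones", "zip": "10001"},
]
parent = list(range(len(records)))

# Pass 1: block on zip code, match records sharing a surname.
run_pass(records, parent, lambda r: r["zip"],
         lambda a, b: a["name"].split()[-1] == b["name"].split()[-1])
# Pass 2: block on the name's first letter; an exact-name rule now
# adds matches (record 2) that pass 1 could not see across zip blocks.
run_pass(records, parent, lambda r: r["name"][0],
         lambda a, b: a["name"] == b["name"])

groups = {}
for idx in range(len(records)):
    groups.setdefault(find(parent, idx), []).append(idx)
print(sorted(sorted(g) for g in groups.values()))
```

Because pass 2 operates on the survivors of pass 1, its looser or differently blocked rule can only grow the groups, which mirrors how each tMatchGroup pass hands its matches to the next.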

This Job duplicates the Standard data integration Job described in Matching customer data through multiple passes, with the standard components replaced by their Map/Reduce equivalents.

You can use Talend Studio to automatically convert the standard Job in the previous section to a Map/Reduce Job. This way, you do not need to redefine the settings of the components in the Job.

Before starting to replicate this scenario, ensure that you have appropriate rights and permissions to access the Hadoop distribution to be used.
