Rearranging the components - 7.3

Processing (Integration)

Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Real-Time Big Data Platform
Talend Studio

  1. Double-click this new Map/Reduce Job to open it in the workspace. The Palette then shows the Map/Reduce components, and in the workspace any crossed-out components indicate components that have no Map/Reduce version.
  2. Right-click each crossed-out component and select Delete to remove it from the workspace.
  3. Drop a tHDFSInput component, a tHDFSOutput component, and a tJDBCOutput component onto the workspace. tHDFSInput reads data from the Hadoop distribution to be used, tHDFSOutput writes data to that distribution, and tJDBCOutput writes data to a given database, a MySQL database in this scenario. These two output components replace the two tLogRow components used to output data.
    If you are building the Job from scratch, you must also drop a tSortRow component and a tUniqRow component.
  4. Connect tHDFSInput to tSortRow using a Row > Main link and, when prompted, accept to get the schema of tSortRow.
  5. Connect tUniqRow to tHDFSOutput using a Row > Uniques link and to tJDBCOutput using a Row > Duplicates link.
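The dataflow the steps above assemble (read, sort, split rows into unique and duplicate streams, then write each stream to its own target) can be sketched outside Talend in plain Python. This is an illustrative sketch only, not Talend code; the function name, the dedup key, and the sample rows are hypothetical assumptions.

```python
# Sketch of the Job's dataflow: tSortRow -> tUniqRow -> two outputs.
# run_pipeline, key, and the sample data are hypothetical; in the real Job the
# Uniques stream goes to tHDFSOutput and the Duplicates stream to tJDBCOutput.

def run_pipeline(rows, key=lambda r: r):
    # tSortRow: sort the incoming rows on the chosen key
    rows = sorted(rows, key=key)
    seen, uniques, duplicates = set(), [], []
    # tUniqRow: first occurrence of a key -> Uniques, repeats -> Duplicates
    for r in rows:
        k = key(r)
        if k in seen:
            duplicates.append(r)   # Row > Duplicates (written to the database)
        else:
            seen.add(k)
            uniques.append(r)      # Row > Uniques (written to HDFS)
    return uniques, duplicates


uniques, duplicates = run_pipeline([3, 1, 2, 1, 3])
# uniques == [1, 2, 3]; duplicates == [1, 3]
```

The split mirrors tUniqRow's two output rows: each distinct key is emitted once on the Uniques flow, and every repeated occurrence is diverted to the Duplicates flow.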