Design the data flow of the Kudu Job

Version: Cloud 8.0
Language: English
Product:
  Talend Big Data
  Talend Big Data Platform
  Talend Data Fabric
  Talend Data Integration
  Talend Data Management Platform
  Talend Data Services Platform
  Talend ESB
  Talend MDM Platform
  Talend Real-Time Big Data Platform
Module: Talend Studio
Content:
  Data Governance > Third-party systems > Database components (Integration) > Kudu components
  Data Quality and Preparation > Third-party systems > Database components (Integration) > Kudu components
  Design and Development > Third-party systems > Database components (Integration) > Kudu components
Last publication date: 2024-02-20

Procedure

  1. In the Integration perspective of Talend Studio, create an empty Spark Batch Job from the Job Designs node in the Repository tree view.
    For further information about how to create a Spark Batch Job, see Creating a Spark Job.
  2. In the workspace, enter the name of each component to be used and select it from the list that appears. In this scenario, the components are tHDFSConfiguration, tKuduConfiguration, tFixedFlowInput, tKuduOutput, tKuduInput, and tLogRow.
    The tFixedFlowInput component loads the sample data into the data flow. In real-world practice, you can use other components, such as tFileInputDelimited, alone or combined with a tMap, in place of tFixedFlowInput to design a more sophisticated process for preparing your data.
  3. Connect tFixedFlowInput to tKuduOutput using the Row > Main link.
  4. Connect tKuduInput to tLogRow using the Row > Main link.
  5. Connect tFixedFlowInput to tKuduInput using the Trigger > OnSubjobOk link.
  6. Leave tHDFSConfiguration and tKuduConfiguration unconnected: they supply the HDFS and Kudu connection details to the whole Job without being linked to any other component. The sketch after this list shows the equivalent data flow written in plain Spark code.
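
To make the flow concrete: the Job loads sample data with tFixedFlowInput, writes it to Kudu through tKuduOutput, reads the table back through tKuduInput, and prints the rows with tLogRow. Talend Studio generates the actual Spark code for the Job itself, so the following Scala sketch is only an illustration of the same flow expressed directly with the Apache Kudu Spark connector (kudu-spark). The master address kudu-master:7051, the table name sample_table, and the two-column sample schema are placeholder assumptions for the example, and the target table is assumed to already exist in Kudu.

import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.SparkSession

object KuduFlowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("KuduFlowSketch").getOrCreate()
    import spark.implicits._

    // Placeholder values -- replace with your own Kudu master and table.
    val kuduMaster = "kudu-master:7051"
    val tableName  = "sample_table"

    // Equivalent of tFixedFlowInput: a small in-memory sample dataset.
    val sample = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")

    // Equivalent of tKuduOutput: insert the rows into an existing Kudu table.
    val kuduContext = new KuduContext(kuduMaster, spark.sparkContext)
    kuduContext.insertRows(sample, tableName)

    // Equivalent of tKuduInput: read the same table back as a DataFrame.
    // The short "kudu" format name is registered by recent kudu-spark releases.
    val readBack = spark.read
      .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
      .format("kudu")
      .load()

    // Equivalent of tLogRow: print the retrieved rows to the console.
    readBack.show()

    spark.stop()
  }
}

Running such a sketch requires the kudu-spark connector artifact on the classpath; in the Talend Job, tKuduConfiguration and tHDFSConfiguration supply the equivalent connection details instead, which is why they stay unconnected in the design workspace.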