Arranging the data flow - 7.3

Machine Learning

Version
7.3
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > Machine Learning components
Data Quality and Preparation > Third-party systems > Machine Learning components
Design and Development > Third-party systems > Machine Learning components
Last publication date
2024-02-21

Procedure

  1. In the Integration perspective of the Studio, create an empty Spark Batch Job, named rf_model_creation for example, from the Job Designs node in the Repository tree view.
    For further information about how to create a Spark Batch Job, see the Getting Started Guide of the Studio.
  2. In the workspace, enter the name of each component to be used and select it from the list that appears. In this scenario, the components are tHDFSConfiguration, tFileInputDelimited, tRandomForestModel, and four tModelEncoder components.
    It is recommended to give the four tModelEncoder components distinct labels so that you can easily recognize the task each of them performs. In this scenario, they are labelled Tokenize, tf, tf_idf and features_assembler, respectively.
  3. Connect all the components except tHDFSConfiguration using the Row > Main link, as shown in the image above.
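The four tModelEncoder labels hint at a standard text-featurization chain: Tokenize splits text into terms, tf turns them into term-frequency counts, tf_idf rescales those counts by inverse document frequency, and features_assembler merges the results into a single feature vector for tRandomForestModel. As a rough, plain-Python sketch of what the tf and tf_idf stages compute (the sample documents and the smoothed IDF formula log((m + 1) / (df + 1)) are illustrative assumptions; in the Job the actual computation runs inside Spark):

```python
import math
from collections import Counter

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Tokenize: split each document into lowercase terms
tokenized = [d.lower().split() for d in docs]

# tf: raw term frequency per document
tfs = [Counter(tokens) for tokens in tokenized]

# tf_idf: weight each term frequency by inverse document frequency,
# using the smoothed formula idf = log((m + 1) / (df + 1)),
# where m is the corpus size and df the term's document frequency
m = len(docs)
df = Counter(term for tokens in tokenized for term in set(tokens))
tfidf = [
    {term: freq * math.log((m + 1) / (df[term] + 1)) for term, freq in tf.items()}
    for tf in tfs
]
```

A term such as "the", which appears in every document, receives a near-zero weight, while rarer terms such as "quick" keep a positive weight, which is the intent of the tf_idf stage before the feature vectors reach the random forest.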