tPredictCluster properties for Apache Spark Batch

Machine Learning

Talend Documentation Team
Talend Real-Time Big Data Platform
Talend Data Fabric
Talend Big Data
Talend Big Data Platform
Talend Studio

These properties are used to configure tPredictCluster running in the Spark Batch Job framework.

The Spark Batch tPredictCluster component belongs to the Machine Learning family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Note that the schema of this component is read-only. Its single column LABEL carries the cluster labels computed by the clustering model during the prediction process.

Model on filesystem

Select this radio button if the model to be used is stored on a file system. The browse button does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode, ensure that you have properly configured the connection in a configuration component in the same Job, such as tHDFSConfiguration.

In the HDFS folder field that is displayed, enter the HDFS URI in which this model is stored.
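For example, an HDFS URI entered in this field might look like the following; the host, port, and path shown here are hypothetical placeholders, not defaults:

```
hdfs://namenode:8020/user/talend/models/kmeans_cluster_model
```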

Model computed in the current Job

Select this radio button and then select the model training component that is used in the same Job to create the model to be used.


Usage rule

This component is used as an intermediate step.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, because the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.