tJava properties for Apache Spark Streaming - 7.3

Java custom code

Version
7.3
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Cloud
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > Custom code components (Integration) > Java custom code components
Data Quality and Preparation > Third-party systems > Custom code components (Integration) > Java custom code components
Design and Development > Third-party systems > Custom code components (Integration) > Java custom code components
Last publication date
2024-02-21

These properties are used to configure tJava running in the Spark Streaming Job framework.

The Spark Streaming tJava component belongs to the Custom Code family.

This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Code

Type in the Java code you want to execute to process the incoming RDD from the input link, or even to create new RDDs out of this input one.

You need to leverage the schema, the link and the component name to write the custom code. For example, if this component is labeled tJava_1 and the connection to it is labeled row1, then the class of the input RDD is row1Struct and the input RDD itself is available with the rdd_tJava_1 variable.

For more detailed instructions, see the default comment provided in the Code field of this component.

For further information about the Spark Java API, see the Apache Spark documentation at https://spark.apache.org/docs/latest/api/java/index.html.
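
For example, the following lines are a minimal sketch, assuming this component is labeled tJava_1 and its input connection is labeled row1, so that the generated record class is row1Struct. They print the Avro schema of the incoming records, using the getSchema() method also shown in the code example of the Usage section below; the schema_row1 variable name is only illustrative.
// Minimal sketch: inspect the Avro schema of the records carried by the row1 link.
// row1Struct and rdd_tJava_1 are names generated from the connection and component labels.
org.apache.avro.Schema schema_row1 = (new row1Struct()).getSchema();
System.out.println("tJava_1 input schema: " + schema_row1.toString(true));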

Advanced settings

Classes

Define the classes that you need to use in the code written in the Code field in the Basic settings view.

It is recommended that you define new classes in this field, rather than in the Code field, to avoid possible serialization exceptions. For a class defined in this field, see the code example in the Usage section below.

Import

Enter the Java code needed to import, if necessary, the external libraries used in the Code field of the Basic settings view.
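
For example, if the code in the Code field used the JDK regular-expression classes and Apache Commons Lang (an assumption made here for illustration; any third-party library must also be made available to the Job), the Import field could contain:
import java.util.regex.Pattern;
import org.apache.commons.lang3.StringUtils;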

Usage

Usage rule

This component is used as an end component and requires an input link.

Code example

In the Code field of the Basic settings view, enter the following code to create an output RDD by using custom transformations on the input RDD. mapInToOut is a class to be defined in the Classes field in the Advanced settings view.
outputrdd_tJava_1 = rdd_tJava_1.map(new mapInToOut(job));
In the Classes field of the Advanced settings view, enter the following code to define the mapInToOut class:
public static class mapInToOut implements
        org.apache.spark.api.java.function.Function<inputStruct, RecordOut_tJava_1> {

    private ContextProperties context = null;
    private java.util.List<org.apache.avro.Schema.Field> fieldsList;

    public mapInToOut(JobConf job) {
        // Keep the Job context available to the function executed on the Spark workers.
        this.context = new ContextProperties(job);
    }

    @Override
    public RecordOut_tJava_1 call(inputStruct origStruct) {

        // Retrieve the list of fields from the input schema the first time the function is called.
        if (fieldsList == null) {
            this.fieldsList = (new inputStruct()).getSchema().getFields();
        }

        // Copy each input field to the output record at the same position.
        RecordOut_tJava_1 value = new RecordOut_tJava_1();

        for (org.apache.avro.Schema.Field field : fieldsList) {
            value.put(field.pos(), origStruct.get(field.pos()));
        }

        return value;
    }
}

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job needs its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.