tJava - 6.1

Talend Components Reference Guide


Function

tJava enables you to enter personalized code and integrate it into a Talend program. This code is executed only once.

Purpose

tJava makes it possible to extend the functionalities of a Talend Job using custom Java commands.

If you have subscribed to one of the Talend solutions with Big Data, this component is available in the following types of Jobs: Standard, Spark Batch, and Spark Streaming, each described in its own section below.

tJava properties

Component family: Custom Code

Basic settings

Code

Type in the Java code you want to execute, according to the task you need to perform. For further information about the Java function syntax specific to Talend, see Talend Studio Help Contents (Help > Developer Guide > API Reference).

For a complete Java reference, see http://docs.oracle.com/javaee/6/api/.

Note

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Talend Studio User Guide.
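
As a simple illustration, the Code field can contain any plain Java to be executed once when the subjob runs, for example:

    // A minimal sketch: plain Java executed once when the subjob runs.
    java.util.Date now = new java.util.Date();
    System.out.println("tJava executed at: " + now);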

Advanced settings

Import

If necessary, enter the Java code to import the external libraries used in the Code field of the Basic settings view.
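
For example, assuming the Apache Commons Lang library has been added to the Job, the Import field could contain standard import statements such as:

    // A hedged example for the Import field: plain Java import statements.
    // The second import assumes commons-lang3 has been made available to the Job.
    import java.util.List;
    import org.apache.commons.lang3.StringUtils;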


tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, provided the component has that check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use.

For further information about variables, see Talend Studio User Guide.
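
For example, an After variable such as ERROR_MESSAGE can be read from the globalMap in a tJava component; the component label tFileInputDelimited_1 below is purely illustrative:

    // Hedged sketch: reading an After variable from the globalMap.
    // "tFileInputDelimited_1" is a hypothetical component label.
    String err = (String) globalMap.get("tFileInputDelimited_1_ERROR_MESSAGE");
    if (err != null) {
        System.out.println("Upstream error: " + err);
    }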

Usage

This component is generally used as a one-component subjob.

Limitation

Knowledge of the Java language is required.

Scenario: Printing out a variable content

The following scenario is a simple demonstration of the tJava component. The Job prints out the number of lines processed, using a Java command and a global variable provided in Talend Studio.

Setting up the Job

  1. Select and drop the following components from the Palette onto the design workspace: tFileInputDelimited, tFileOutputExcel, tJava.

  2. Connect the tFileInputDelimited to the tFileOutputExcel using a Row > Main connection. The content of a delimited text file will be passed on through the connection to an .xls file without further transformation.

  3. Then connect the tFileInputDelimited component to the tJava component using a Trigger > On Subjob Ok link. This link sets a sequence so that tJava is executed only at the end of the main process.

Configuring the input component

  1. Set the Basic settings of the tFileInputDelimited component.

  2. Define the path to the input file in the File name field.

    The input file used in this example is a simple text file made of two columns: Names and their respective Emails.

  3. Click the Edit Schema button, and set the two-column schema. Then click OK to close the dialog box.

  4. When prompted, click OK to accept the propagation, so that the tFileOutputExcel component gets automatically set with the input schema.

Configuring the output component

Set the output file to receive the input content without changes. If the file does not exist already, it will get created.

In this example, the Sheet name is Email and the Include Header box is selected.

Configuring the tJava component

  1. Select the tJava component to define the Java command to execute.

  2. In the Code area, type in the following command:

    // Retrieve the line count of tFileInputDelimited_1 from the globalMap
    // and print it to the console.
    String var = "Nb of line processed: ";
    var = var + globalMap.get("tFileInputDelimited_1_NB_LINE");
    System.out.println(var);

    In this use case, we use the NB_LINE variable of the tFileInputDelimited component. To access the global variable list, press Ctrl+Space on your keyboard and select the relevant global parameter.

Job execution

Save your Job and press F6 to execute it.

The content gets passed on to the Excel file defined, and the number of lines processed is displayed on the Run console.
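
For example, if the input file contains eight rows, the console displays a line such as the following (the count shown is illustrative):

    Nb of line processed: 8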

tJava properties in Spark Batch Jobs

Component family: Custom Code

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected.

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

Code

Type in the Java code you want to execute to process the incoming RDD from the input link, or even to create new RDDs from it.

You need to leverage the schema, the link and the component name to write the custom code. For example, if this component is labeled tJava_1 and the connection to it is labeled row1, then the class of the input RDD is row1Struct and the input RDD itself is available with the rdd_tJava_1 variable.

For more detailed instructions, see the default comment provided in the Code field of this component.

For further information about Spark's Java API, see Apache's Spark documentation at https://spark.apache.org/docs/latest/api/java/index.html.

Advanced settings

Classes

Define the classes that you need to use in the code written in the Code field in the Basic settings view.

It is recommended to define new classes in this field, instead of in the Code field, to avoid possible serialization exceptions.

Import

If necessary, enter the Java code to import the external libraries used in the Code field of the Basic settings view.

Usage in Spark Batch Jobs

In a Talend Spark Batch Job, it is used as an end component and requires an input link. The other components used along with it must be Spark Batch components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file-system-related component from the Storage family is required in the same Job, so that Spark can use this component to connect to the file system to which the jar files the Job depends on are transferred.

This connection is effective on a per-Job basis.

Code example

In the Code field of the Basic settings view, enter the following code to create an output RDD by using custom transformations on the input RDD. mapInToOut is a class to be defined in the Classes field in the Advanced settings view.

outputrdd_tJava_1 = rdd_tJava_1.map(new mapInToOut(job));

In the Classes field of the Advanced settings view, enter the following code to define the mapInToOut class:

public static class mapInToOut implements
        org.apache.spark.api.java.function.Function<inputStruct, RecordOut_tJava_1> {

    private ContextProperties context = null;
    private java.util.List<org.apache.avro.Schema.Field> fieldsList;

    public mapInToOut(JobConf job) {
        this.context = new ContextProperties(job);
    }

    @Override
    public RecordOut_tJava_1 call(inputStruct origStruct) {
        // Cache the Avro field list of the input schema on the first call.
        if (fieldsList == null) {
            this.fieldsList = (new inputStruct()).getSchema().getFields();
        }

        // Copy every field of the input record to the output record by position.
        RecordOut_tJava_1 value = new RecordOut_tJava_1();
        for (org.apache.avro.Schema.Field field : fieldsList) {
            value.put(field.pos(), origStruct.get(field.pos()));
        }
        return value;
    }
}

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

Knowledge of Spark and the Java language is required.

Related scenarios

No scenario is available for the Spark Batch version of this component yet.

tJava properties in Spark Streaming Jobs

Warning

The streaming version of this component is available in the Palette of the studio only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Component family: Custom Code

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected.

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

Code

Type in the Java code you want to execute to process the incoming RDD from the input link, or even to create new RDDs from it.

You need to leverage the schema, the link and the component name to write the custom code. For example, if this component is labeled tJava_1 and the connection to it is labeled row1, then the class of the input RDD is row1Struct and the input RDD itself is available with the rdd_tJava_1 variable.

For more detailed instructions, see the default comment provided in the Code field of this component.

For further information about Spark's Java API, see Apache's Spark documentation at https://spark.apache.org/docs/latest/api/java/index.html.

Advanced settings

Classes

Define the classes that you need to use in the code written in the Code field in the Basic settings view.

It is recommended to define new classes in this field, instead of in the Code field, to avoid possible serialization exceptions.

Import

If necessary, enter the Java code to import the external libraries used in the Code field of the Basic settings view.

Usage in Spark Streaming Jobs

In a Talend Spark Streaming Job, it is used as an end component and requires an input link. The other components used along with it must be Spark Streaming components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Code example

In the Code field of the Basic settings view, enter the following code to create an output RDD by using custom transformations on the input RDD. mapInToOut is a class to be defined in the Classes field in the Advanced settings view.

outputrdd_tJava_1 = rdd_tJava_1.map(new mapInToOut(job));

In the Classes field of the Advanced settings view, enter the following code to define the mapInToOut class:

public static class mapInToOut implements
        org.apache.spark.api.java.function.Function<inputStruct, RecordOut_tJava_1> {

    private ContextProperties context = null;
    private java.util.List<org.apache.avro.Schema.Field> fieldsList;

    public mapInToOut(JobConf job) {
        this.context = new ContextProperties(job);
    }

    @Override
    public RecordOut_tJava_1 call(inputStruct origStruct) {
        // Cache the Avro field list of the input schema on the first call.
        if (fieldsList == null) {
            this.fieldsList = (new inputStruct()).getSchema().getFields();
        }

        // Copy every field of the input record to the output record by position.
        RecordOut_tJava_1 value = new RecordOut_tJava_1();
        for (org.apache.avro.Schema.Field field : fieldsList) {
            value.put(field.pos(), origStruct.get(field.pos()));
        }
        return value;
    }
}

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file-system-related component from the Storage family is required in the same Job, so that Spark can use this component to connect to the file system to which the jar files the Job depends on are transferred.

This connection is effective on a per-Job basis.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Related scenarios

No scenario is available for the Spark Streaming version of this component yet.