Component-specific settings - 6.5

Talend Job Script Reference Guide

Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Talend CommandLine
Talend Studio

The following table describes the Job script functions and parameters that you can define in the setSettings {} function of the component.

Function/parameter Description Mandatory?


SQL_CONTEXT

Specify the query language you want tSqlRow to use. Acceptable values:

  • SQLContext: use the Spark native query language.
  • HiveContext: use the Hive query language supported by Spark.
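In a Job script, this setting might look as follows. This is a minimal sketch; the exact quoting and surrounding structure follow the general Job script conventions described elsewhere in this guide:

    setSettings {
        SQL_CONTEXT : "SQLContext"
    }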



Enter your query, paying particular attention to the order of the fields so that they match the schema definition.

The tSqlRow component uses the label of its input link to name the registered table that stores the datasets from that input link. For example, if the input link is labeled row1, then row1 is the name of the table against which you can run queries.
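For example, assuming the query parameter is named SQL_QUERY (a hypothetical name for illustration; use the parameter name shown for your component), a query against an input link labeled row1 might read:

    setSettings {
        SQL_QUERY : "\"select id, name from row1\""
    }

The escaped inner quotes reflect the Job script convention of passing string values as quoted expressions.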



In this function, use the JAR_NAME parameter to add the Spark SQL or Hive SQL user-defined function (UDF) JARs you want tSqlRow to use.

If you do not want to call a UDF by its fully qualified class name (FQCN), you must define a function alias for it in the TEMP_SQL_UDF_FUNCTIONS {} function and use that alias instead. The alias approach is recommended, because an alias is usually more practical for calling a UDF from the query.
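A sketch of registering a UDF JAR follows. Only the JAR_NAME parameter is confirmed above; the enclosing function name TEMP_SQL_UDF_JARS and the JAR path are assumptions for illustration, so check the names in your component's generated script:

    TEMP_SQL_UDF_JARS {
        JAR_NAME : "\"/opt/udf/my-udfs.jar\""
    }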



TEMP_SQL_UDF_FUNCTIONS

Include in this function the TEMPORARY_FUNCTION_ALIAS and UDF_FQCN parameters to give each imported UDF class a temporary function name to be used in the query in tSqlRow. If you have set the SQL_CONTEXT parameter to SQLContext, you must also include the DATA_TYPE parameter to specify the data type of the output of the Spark SQL UDF.
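Combining these parameters, an entry for a hypothetical string-returning UDF class (the alias, class name, and quoting here are illustrative assumptions) might look like:

    TEMP_SQL_UDF_FUNCTIONS {
        TEMPORARY_FUNCTION_ALIAS : "\"my_upper\"",
        UDF_FQCN : "\"org.example.udf.MyUpper\"",
        DATA_TYPE : "STRING"
    }

DATA_TYPE is only needed when SQL_CONTEXT is set to SQLContext, as described above.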



Use this parameter to specify a text label for the component.
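For example, assuming the parameter is named LABEL (the name is not given above, so this is an assumption), the setting might read:

    setSettings {
        LABEL : "\"my_tSqlRow_query\""
    }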