tLibraryLoad properties for Apache Spark Streaming - 6.4

Library import

author
Talend Documentation Team
EnrichVersion
6.4
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Custom code components (Integration) > Library import component
Data Quality and Preparation > Third-party systems > Custom code components (Integration) > Library import component
Design and Development > Third-party systems > Custom code components (Integration) > Library import component
EnrichPlatform
Talend Studio

These properties are used to configure tLibraryLoad running in the Spark Streaming Job framework.

The Spark Streaming tLibraryLoad component belongs to the Custom Code family.

The component in this framework is available in Talend Real-Time Big Data Platform and Talend Data Fabric.

Basic settings

Library

Select the library you want to import from the list, or click the [...] button to browse to the library in your directory.

Advanced settings

Import

If needed, enter the Java import statements for the external library so that it can be used in the code editing field of the Basic settings tab of custom-code components, such as tJava in a Spark Streaming Job.
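
For example, assuming the commons-lang3 jar has been selected in the Library list (an illustrative choice, not part of this documentation), you could enter the matching import statement in this field and then call the imported class from a tJava component in the same Job. The snippets below are a minimal sketch under that assumption:

    // Import field of tLibraryLoad (Advanced settings):
    import org.apache.commons.lang3.StringUtils;

    // Code field of a tJava component (Basic settings), which can now
    // reference the imported class directly:
    String trimmed = StringUtils.trimToEmpty("  sample value  ");
    System.out.println(trimmed);

Without the import statement declared here, the tJava code would have to reference the class by its fully qualified name.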

Usage

Usage rule

This component is used standalone, without needing to be connected to other components.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Limitation

The library is loaded locally.