How a Talend Job for Apache Spark works - 7.1

Talend Big Data Studio User Guide

Author: Talend Documentation Team
Version: 7.1
Product: Talend Big Data
Task: Design and Development
Platform: Talend Studio
Using the Spark-specific components, a Talend Spark Job makes use of the Spark framework to process RDDs (Resilient Distributed Datasets) on top of a given Spark cluster.

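The Studio generates and runs the Spark code for you, but a minimal hand-written sketch in Java (the language of the code Talend generates) can help picture what a Spark Job does with an RDD. The class name, sample data, and use of local mode below are illustrative assumptions, not Talend-generated code.

    // Minimal sketch: distribute a small dataset as an RDD, transform it, and collect the result.
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    import java.util.Arrays;

    public class RddSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("rdd_sketch")
                    .setMaster("local[*]"); // local mode, described below
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                JavaRDD<String> words = sc.parallelize(Arrays.asList("talend", "spark", "job"));
                JavaRDD<Integer> lengths = words.map(String::length); // a simple RDD transformation
                System.out.println(lengths.collect());                // prints [6, 5, 3]
            }
        }
    }
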
A Talend Spark Job can be run in any of the following modes:

  • Local: at runtime, the Studio builds the Spark environment within itself and runs the Job locally inside the Studio. In this mode, each processor of the local machine is used as a Spark worker to perform the computations. This mode requires only minimal parameters to be set in the configuration view.

    Note that this local machine is the machine on which the Job is actually run.

  • Standalone: the Studio connects to a Spark-enabled cluster and runs the Job from that cluster.

  • Yarn client: the Studio runs the Spark driver to orchestrate how the Job should be performed, then sends this orchestration to the Yarn service of a given Hadoop cluster so that the Resource Manager of that Yarn service requests execution resources accordingly (see the configuration sketch after this list).
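
In practice the Studio derives the Spark master setting from the run mode you select in the Spark configuration view, but the following sketch shows how the three modes would translate into SparkConf settings if written by hand. The host name spark-master.example.com and port 7077 are placeholders, and the Yarn client entry uses the Spark 2.x form of the setting.

    import org.apache.spark.SparkConf;

    public class RunModeSketch {
        public static void main(String[] args) {
            // Local: the driver and the workers all run inside the local JVM,
            // with one worker per available processor ("local[*]").
            SparkConf local = new SparkConf().setAppName("demo").setMaster("local[*]");

            // Standalone: connect to the master of a Spark standalone cluster (placeholder host and port).
            SparkConf standalone = new SparkConf().setAppName("demo")
                    .setMaster("spark://spark-master.example.com:7077");

            // Yarn client: the driver runs where the Job is launched and asks the YARN
            // Resource Manager (located through the Hadoop configuration on the classpath)
            // for executor resources.
            SparkConf yarnClient = new SparkConf().setAppName("demo")
                    .setMaster("yarn")
                    .set("spark.submit.deployMode", "client");
        }
    }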