How a Talend Job for Apache Spark works - 7.1

Talend Big Data Studio User Guide

Using the Spark-specific components, a Talend Spark Job leverages the Spark framework to process RDDs (Resilient Distributed Datasets) on top of a given Spark cluster.
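
Under the hood, such a Job boils down to ordinary Spark code. The following is a minimal sketch, written against the plain Spark Java API rather than the code the Studio actually generates; the class name, application name, and sample data are illustrative only.

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class RDDProcessingSketch {
        public static void main(String[] args) {
            // The Studio derives this configuration from the Job's Spark
            // configuration view; here it is written out by hand.
            // "local[*]" is an assumption standing in for local mode.
            SparkConf conf = new SparkConf()
                    .setAppName("RDDProcessingSketch")
                    .setMaster("local[*]");

            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Build an RDD and apply distributed transformations to it.
                JavaRDD<String> lines = sc.parallelize(
                        Arrays.asList("hello spark", "hello talend"));
                long words = lines
                        .flatMap(l -> Arrays.asList(l.split(" ")).iterator())
                        .count();
                System.out.println("word count: " + words);
            }
        }
    }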

A Talend Spark Job can be run in any of the following modes:

  • Local: the Studio builds the Spark environment in itself at runtime to run the Job locally within the Studio. In this mode, each processor of the local machine is used as a Spark worker to perform the computations. This mode requires only a minimal set of parameters in the configuration view.

    Note that this local machine is the machine on which the Job is actually run.

  • Standalone: the Studio connects to a Spark-enabled cluster to run the Job on that cluster (see the configuration sketch after this list).
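
In Spark terms, the difference between these modes comes down to the master URL the Job is submitted with. The following is a minimal sketch contrasting the two; the application name and the standalone master address spark://master-host:7077 are hypothetical placeholders, not values taken from an actual Job.

    import org.apache.spark.SparkConf;

    public class MasterUrlSketch {
        public static void main(String[] args) {
            // Local mode: the Studio's own JVM hosts the Spark environment;
            // "local[*]" turns each processor of the local machine
            // into a Spark worker.
            SparkConf local = new SparkConf()
                    .setAppName("MyTalendJob") // illustrative name
                    .setMaster("local[*]");

            // Standalone mode: the Job runs on an existing Spark-enabled
            // cluster; master-host:7077 is a hypothetical address of the
            // cluster's standalone master.
            SparkConf standalone = new SparkConf()
                    .setAppName("MyTalendJob")
                    .setMaster("spark://master-host:7077");
        }
    }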