tWindow properties for Apache Spark Streaming - 7.1

Processing (Integration)

Version
7.1
Language
English (United States)
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
Module
Talend Studio

These properties are used to configure tWindow running in the Spark Streaming Job framework.

The Spark Streaming tWindow component belongs to the Processing family.

The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.

Basic settings

Window duration

Enter, without quotation marks, the length (in milliseconds) of the window to be applied.

For example, if the batch size defined in the Spark configuration tab is 2 seconds, a window duration of 6 seconds means that 3 batches are handled each time this window is applied.
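Talend generates the underlying Spark code from this setting; conceptually, it corresponds to the window operation of the Spark Streaming API. The following minimal sketch illustrates that correspondence with the plain Spark Streaming Java API, reusing the batch size and window duration from the example above (the socket source, host, and port are illustrative assumptions, not code produced by Talend Studio):

  import org.apache.spark.SparkConf;
  import org.apache.spark.streaming.Durations;
  import org.apache.spark.streaming.api.java.JavaDStream;
  import org.apache.spark.streaming.api.java.JavaStreamingContext;

  public class WindowSketch {
      public static void main(String[] args) throws InterruptedException {
          SparkConf conf = new SparkConf().setAppName("WindowSketch").setMaster("local[2]");
          // Batch size of 2 seconds, as defined in the Spark configuration tab
          JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));
          // Illustrative source; a Talend Job reads from its input component instead
          JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);
          // Window duration of 6 seconds: each application of the window
          // covers 6 / 2 = 3 micro-batches
          JavaDStream<String> windowed = lines.window(Durations.seconds(6));
          windowed.print();
          jssc.start();
          jssc.awaitTermination();
      }
  }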

Define the slide duration

Select the Define the slide duration check box and, in the field that is displayed, enter, without quotation marks, the interval (in milliseconds) at which the window is applied.

For example, if the batch size defined in the Spark configuration tab is 2 seconds, a slide duration of 4 seconds means that the window is applied every 4 seconds. If the window duration is 6 seconds, two consecutive applications of the window overlap by one batch (6 - 4 = 2 seconds, that is, one 2-second batch).

If you leave this check box cleared, the slide duration defaults to the batch size defined in the Spark configuration tab.

Both the window duration and the slide duration must be multiples of the batch size defined in the Spark configuration tab.
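For instance, continuing the sketch above (again an illustrative assumption, not code produced by Talend Studio), a 6-second window with a 4-second slide over a 2-second batch size would be expressed as:

  // Window of 6 s applied every 4 s; both are multiples of the 2-second
  // batch size, and consecutive windows share one 2-second batch
  JavaDStream<String> sliding = lines.window(Durations.seconds(6), Durations.seconds(4));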

Usage

Usage rule

This component is used as an intermediate step.

This component does not change the data schema but controls the pace at which the micro-batches are processed, via the window you define.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job needs its dependent JAR files in order to execute, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • YARN mode (YARN client or YARN cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.

    • When using on-premise distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.