tFixedFlowInput properties for Apache Spark Batch - Cloud - 8.0

tFixedFlowInput

Version: Cloud, 8.0
Language: English
Product:
  • Talend Big Data
  • Talend Big Data Platform
  • Talend Data Fabric
  • Talend Data Integration
  • Talend Data Management Platform
  • Talend Data Services Platform
  • Talend ESB
  • Talend MDM Platform
  • Talend Real-Time Big Data Platform
Module: Talend Studio
Last publication date: 2024-02-20

These properties are used to configure tFixedFlowInput running in the Spark Batch Job framework.

The Spark Batch tFixedFlowInput component belongs to the Misc family.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) that are processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-in: The schema is created and stored locally for this component only. For more information about component schemas, see Basic settings tab.

 

Repository: You have already created the schema and stored it in the Repository, so it can be reused in various projects and Job designs. For more information about component schemas, see Basic settings tab.
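
The examples in the following sections assume a simple Built-in schema with two hypothetical columns, chosen here only for illustration: firstName of type String and age of type Integer.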

Mode

From the three options, select the mode that you want to use.

Use Single Table: Enter the data that you want to generate in the relevant Value field.

Use Inline Table: Add the row(s) that you want to generate.

Use Inline Content: Enter the data that you want to generate, separated by the separators that you have defined in the Row Separator and Field Separator fields, as shown in the example below.
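
For example, with Use Inline Content selected and the hypothetical two-column schema described above (firstName, age), a Field Separator of ";" and a Row Separator of "\n" (example settings, to be adapted to your own) would generate three rows from content such as:

  Alice;30
  Bob;25
  Carol;41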

Number of rows

Enter the number of rows to be generated.

Values

Between double quotation marks, enter the values corresponding to the columns you defined in the schema dialog box via the Edit schema button.
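
As an illustration with the same hypothetical schema, in Use Single Table mode the Value fields could contain:

  firstName: "Alice"
  age: 30

String values are Java string literals and must stay between double quotation marks; numeric values are entered without them.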

Advanced settings

Set the number of partitions

Select this check box and then enter the number of partitions into which you want to dispatch the input rows.

If you leave this check box cleared, each input row forms a partition. For example, with 5 in the Number of rows field, each row is handled as one partition, making 5 partitions in total.
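
The effect can be pictured with Spark's own API, where the number of partitions corresponds to the numSlices argument of the parallelize method. The following Java snippet is a minimal, self-contained sketch of that underlying mechanism, run in local mode for illustration only; it is not the code that Talend Studio generates:

  import java.util.Arrays;
  import java.util.List;

  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;

  public class FixedFlowPartitionsSketch {
      public static void main(String[] args) {
          SparkConf conf = new SparkConf()
                  .setAppName("FixedFlowPartitionsSketch")
                  .setMaster("local[*]"); // local master for illustration only
          try (JavaSparkContext sc = new JavaSparkContext(conf)) {
              // Five fixed input rows, as with Number of rows = 5.
              List<String> rows = Arrays.asList("r1", "r2", "r3", "r4", "r5");

              // Check box cleared: each input row forms its own partition.
              JavaRDD<String> onePerRow = sc.parallelize(rows, rows.size());
              System.out.println(onePerRow.getNumPartitions()); // prints 5

              // Check box selected with the value 2: the five rows are
              // dispatched into 2 partitions.
              JavaRDD<String> twoPartitions = sc.parallelize(rows, 2);
              System.out.println(twoPartitions.getNumPartitions()); // prints 2
          }
      }
  }

Dispatching many small rows into fewer partitions reduces the number of Spark tasks and therefore the scheduling overhead, which is usually the reason for setting this option.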

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration for Apache Spark Batch or tS3Configuration for Apache Spark Batch.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.