tFileInputPositional properties for Apache Spark Batch - 7.0

Positional

author
Talend Documentation Team
EnrichVersion
7.0
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > File components (Integration) > Positional components
Data Quality and Preparation > Third-party systems > File components (Integration) > Positional components
Design and Development > Third-party systems > File components (Integration) > Positional components
EnrichPlatform
Talend Studio

These properties are used to configure tFileInputPositional running in the Spark Batch Job framework.

The Spark Batch tFileInputPositional component belongs to the File family.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local system.

The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are automatically pre-filled with the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component reads all of the files stored in that folder, for example /user/talend/in; if sub-folders exist, they are automatically ignored unless you set the property spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to true in the Advanced properties table in the Spark configuration tab.

If you want to specify more than one file or directory in this field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputPositional automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Warning: Use an absolute path (instead of a relative path) in this field to avoid possible errors.

The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode, ensure that you have properly configured the connection in a configuration component in the same Job, such as tHDFSConfiguration.
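As an illustration only, the following Java sketch (this is not the code generated by Talend Studio; the application name and the paths are assumptions made for this example) shows the equivalent behavior in plain Spark: comma-separated paths are read together, compressed files such as *.gz are decompressed transparently by the Hadoop input format, and the recursive property mentioned above makes the files in sub-folders visible.

  // Illustrative sketch only: reading the Folder/File value with plain Spark.
  // The application name and the paths are assumptions made for this example.
  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;

  public class PositionalInputSketch {
      public static void main(String[] args) {
          SparkConf conf = new SparkConf().setAppName("tFileInputPositional-sketch");
          // Same effect as adding this property in the Advanced properties table:
          // also read the files located in sub-folders.
          conf.set("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", "true");

          try (JavaSparkContext sc = new JavaSparkContext(conf)) {
              // Several paths separated by commas; the *.gz file is decompressed at read time.
              JavaRDD<String> lines =
                      sc.textFile("/user/talend/in,/user/talend/archive/data.gz");
              System.out.println(lines.count());
          }
      }
  }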

Die on error

Select the check box to stop the execution of the Job when an error occurs.

Row separator

The separator used to identify the end of a row.

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. By default, it is a space.

Alignment: Select the appropriate alignment parameter.

Pattern

Enter, between double quotation marks, the column length values separated by commas; the whole value is interpreted as a string. Make sure the values entered in this field are consistent with the schema defined.
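For example (an illustrative Java sketch only, with made-up column lengths and data), a Pattern value of "8,5,8" slices each row into three fixed-width fields matching a three-column schema:

  // Illustrative sketch only: how a Pattern such as "8,5,8" maps a row onto three columns.
  public class PatternSketch {
      public static void main(String[] args) {
          String pattern = "8,5,8";              // column lengths, as entered in the Pattern field
          String row = "20180101ALICE00001234";  // 8 + 5 + 8 = 21 characters

          int offset = 0;
          for (String length : pattern.split(",")) {
              int size = Integer.parseInt(length);
              System.out.println("[" + row.substring(offset, offset + size) + "]");
              offset += size;
          }
          // Prints [20180101], [ALICE] and [00001234], one per line.
      }
  }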

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 if the data has no header, or 1 if the first row of the data is a header.

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Set minimum partitions

Select this check box to control the number of partitions to be created from the input data over the default partitioning behavior of Spark.

In the displayed field, enter, without quotation marks, the minimum number of partitions you want to obtain.

When you want to control the partition number, a general guideline is to set at least as many partitions as there are executors for parallelism, while bearing in mind the available memory and the data transfer pressure on your network.
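As a rough analogy (an illustrative Java sketch only, not the code generated by Talend Studio; the path and the value 24 are assumptions), this option is comparable to passing a minimum number of partitions to Spark when the file is read:

  // Illustrative sketch only: requesting at least 24 input partitions in plain Spark.
  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;

  public class MinPartitionsSketch {
      public static void main(String[] args) {
          try (JavaSparkContext sc =
                  new JavaSparkContext(new SparkConf().setAppName("min-partitions-sketch"))) {
              // The second argument is a hint for the minimum number of partitions.
              JavaRDD<String> lines = sc.textFile("/user/talend/in", 24);
              System.out.println(lines.getNumPartitions());
          }
      }
  }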

Custom Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
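For example (an illustrative Java sketch only; the pattern and the sample value are assumptions), with a period (.) as the thousands separator and a comma (,) as the decimal separator, the text "1.234.567,89" represents the number 1234567.89:

  // Illustrative sketch only: parsing a number whose thousands separator is "." and
  // whose decimal separator is ",".
  import java.math.BigDecimal;
  import java.text.DecimalFormat;
  import java.text.DecimalFormatSymbols;
  import java.text.ParseException;

  public class SeparatorSketch {
      public static void main(String[] args) throws ParseException {
          DecimalFormatSymbols symbols = new DecimalFormatSymbols();
          symbols.setGroupingSeparator('.');  // thousands separator
          symbols.setDecimalSeparator(',');   // decimal separator

          DecimalFormat format = new DecimalFormat("#,##0.##", symbols);
          format.setParseBigDecimal(true);

          BigDecimal value = (BigDecimal) format.parse("1.234.567,89");
          System.out.println(value);          // prints 1234567.89
      }
  }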

Trim columns

Select this check box to remove the leading and trailing whitespace from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select the particular columns to trim.

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake store (technical preview) for Job deployment in the Spark configuration tab.
    • When using other distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.