tExtractDelimitedFields properties for Apache Spark Batch - 7.0


These properties are used to configure tExtractDelimitedFields running in the Spark Batch Job framework.

The Spark Batch tExtractDelimitedFields component belongs to the Processing family.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Prev.Comp.Column list

Select the column you need to extract data from.

Die on error

Select the check box to stop the execution of the Job when an error occurs.

Field separator

Enter a character, a string, or a regular expression to separate fields of the transferred data.
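
For illustration only, the following standalone Java sketch shows how a literal separator and a regular-expression separator each split one delimited value into fields. The input strings and separators are hypothetical; this is not the component's generated code.

    import java.util.Arrays;

    public class FieldSeparatorDemo {
        public static void main(String[] args) {
            // Hypothetical value taken from the selected input column.
            String raw = "2017-10-05;Seattle;42";

            // A literal one-character separator: each ";" ends a field.
            String[] bySemicolon = raw.split(";");
            System.out.println(Arrays.toString(bySemicolon)); // [2017-10-05, Seattle, 42]

            // A regular-expression separator: any run of commas or whitespace.
            String[] byRegex = "a, b  c".split("[,\\s]+");
            System.out.println(Arrays.toString(byRegex));     // [a, b, c]
        }
    }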

CSV options

Select this check box to include CSV-specific parameters such as Escape char and Text enclosure.
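
As a rough illustration of what these two parameters control, here is a minimal, hypothetical quote-aware splitter in Java. The component's actual CSV handling is done by the generated Spark code; treat this only as a sketch of the Text enclosure and Escape char semantics.

    import java.util.ArrayList;
    import java.util.List;

    public class CsvOptionsDemo {
        // Minimal quote-aware split: the enclosure wraps fields that may contain
        // the separator; the escape character neutralizes the next character.
        static List<String> split(String line, char sep, char enclosure, char escape) {
            List<String> fields = new ArrayList<>();
            StringBuilder cur = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (c == escape && i + 1 < line.length()) {
                    cur.append(line.charAt(++i));   // keep the escaped character literally
                } else if (c == enclosure) {
                    inQuotes = !inQuotes;           // toggle enclosure state
                } else if (c == sep && !inQuotes) {
                    fields.add(cur.toString());     // separator outside quotes ends the field
                    cur.setLength(0);
                } else {
                    cur.append(c);
                }
            }
            fields.add(cur.toString());
            return fields;
        }

        public static void main(String[] args) {
            // The ";" inside the enclosed field is data, not a field boundary.
            System.out.println(split("\"Doe; John\";42", ';', '"', '\\'));
            // -> [Doe; John, 42]
        }
    }

Without the enclosure handling, the same line would be cut at both semicolons and yield three fields instead of two.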

Advanced settings

Custom Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Then select the encoding to be used from the list or select Custom and define it manually.
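
The sketch below, using a hypothetical ISO-8859-1 byte sequence, illustrates why the selected encoding matters when the stored bytes are decoded into strings:

    import java.nio.charset.StandardCharsets;

    public class EncodingDemo {
        public static void main(String[] args) {
            // "café" stored as ISO-8859-1 bytes: the é is the single byte 0xE9.
            byte[] stored = {'c', 'a', 'f', (byte) 0xE9};

            // Decoding with the wrong charset garbles the accented character...
            System.out.println(new String(stored, StandardCharsets.UTF_8));      // caf?
            // ...while the matching charset recovers the original text.
            System.out.println(new String(stored, StandardCharsets.ISO_8859_1)); // café
        }
    }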

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
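
For example, with the European convention of a period for thousands and a comma for decimals (a hypothetical setting), a java.text.DecimalFormat configured the same way parses a value that the default separators would misread:

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.text.ParseException;

    public class AdvancedSeparatorDemo {
        public static void main(String[] args) throws ParseException {
            // Swap the defaults: "." for thousands, "," for decimals.
            DecimalFormatSymbols symbols = new DecimalFormatSymbols();
            symbols.setGroupingSeparator('.');
            symbols.setDecimalSeparator(',');

            DecimalFormat format = new DecimalFormat("#,##0.##", symbols);
            // "1.234,56" would be misread with the default separators.
            Number parsed = format.parse("1.234,56");
            System.out.println(parsed.doubleValue()); // 1234.56
        }
    }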

Trim all columns

Select this check box to remove the leading and trailing whitespace from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select the particular columns to trim.

Check column to trim

This table is filled automatically with the schema being used. Select the check box(es) corresponding to the column(s) to be trimmed.

Check each row structure against schema

Select this check box to check whether the total number of columns in each row is consistent with the schema. If it is not, an error message is displayed in the console.
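
Conceptually, the check compares the number of extracted fields with the schema size, roughly as in this sketch (the row, separator, and three-column schema are hypothetical):

    public class RowStructureCheckDemo {
        public static void main(String[] args) {
            String row = "2017-10-05;Seattle";  // only 2 fields
            int schemaSize = 3;                 // hypothetical schema with 3 columns

            // The -1 limit keeps trailing empty fields, so "a;;" still yields 3 columns.
            String[] fields = row.split(";", -1);
            if (fields.length != schemaSize) {
                System.err.println("Row has " + fields.length
                        + " columns but the schema defines " + schemaSize + ".");
            }
        }
    }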

Check date

Select this check box to check the date format strictly against the input schema.
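
Strict date checking can be pictured as a non-lenient java.text.SimpleDateFormat: an impossible value such as 2017-13-40 is rejected rather than silently rolled over into a valid date. A minimal sketch, assuming a hypothetical yyyy-MM-dd pattern:

    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    public class StrictDateDemo {
        public static void main(String[] args) {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
            fmt.setLenient(false); // strict: reject values like "2017-13-40"

            for (String value : new String[] {"2017-10-05", "2017-13-40"}) {
                try {
                    fmt.parse(value);
                    System.out.println(value + " -> valid");
                } catch (ParseException e) {
                    System.out.println(value + " -> rejected");
                }
            }
        }
    }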

Decode String for long, int, short, byte Types

Select this check box if any of your numeric types (long, integer, short, or byte) will be parsed from a hexadecimal or octal string.
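
The decode() methods of the java.lang wrapper classes illustrate the notations involved: a 0x prefix for hexadecimal and a leading 0 for octal, in addition to plain decimal. A short illustration, not the component's generated code:

    public class DecodeNumericDemo {
        public static void main(String[] args) {
            System.out.println(Integer.decode("255"));  // 255 (decimal)
            System.out.println(Integer.decode("0xFF")); // 255 (hexadecimal)
            System.out.println(Long.decode("0777"));    // 511 (octal)
            System.out.println(Short.decode("0x1F"));   // 31
        }
    }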

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files to be available at execution time, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • Yarn mode (YARN client or YARN cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake store (technical preview) for Job deployment in the Spark configuration tab.

    • When using other distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.