tFileStreamInputRegex properties for Apache Spark Streaming - 6.5


author
Talend Documentation Team
EnrichVersion
6.5
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > File components (Integration) > Regex components
Data Quality and Preparation > Third-party systems > File components (Integration) > Regex components
Design and Development > Third-party systems > File components (Integration) > Regex components
EnrichPlatform
Talend Studio

These properties are used to configure tFileStreamInputRegex running in the Spark Streaming Job framework.

The Spark Streaming tFileStreamInputRegex component belongs to the File family.

The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system, such as HDFS.

If you leave this check box clear, the target file system is the local system.

The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component reads all of the files stored in that folder, for example /user/talend/in. If sub-folders exist, they are automatically ignored unless you define the property spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to be true in the Advanced properties table in the Spark configuration tab, as shown in the example below.
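For instance, a minimal sketch of the entry to add in the Advanced properties table; the property name is the standard Hadoop setting, and the double quotes follow the usual convention for values typed in Studio tables:

  Property: "spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive"
  Value: "true"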

If you want to specify more than one file or directory in this field, separate each path using a comma (,).
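For example, assuming hypothetical HDFS paths, the field could contain:

  "/user/talend/in,/user/talend/archive/extra_data"

Both the folder and the additional path are then read by the component.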

If the file to be read is a compressed one, enter the file name with its extension; tFileStreamInputRegex then automatically decompresses it at runtime (see the example after this list). The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo
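For instance, with a hypothetical gzip-compressed file, you would enter its full name including the extension:

  "/user/talend/in/events.log.gz"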

The browse button does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode, ensure that you have properly configured the connection in a configuration component in the same Job, such as tHDFSConfiguration.

Row separator

The separator used to identify the end of a row, for example "\n".

Regex

This field can contain multiple lines. Type in your regular expression, including the subpatterns matching the fields to be extracted; see the example below.

Note: Backslashes need to be doubled in the regular expression.

Warning: Regex syntax requires double quotes.
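As an illustration, assuming a hypothetical input whose rows hold a numeric ID, a name, and a date separated by semicolons, the Regex field could contain:

  "^(\\d+);(\\w+);(\\d{4}-\\d{2}-\\d{2})$"

Note the double quotes around the whole expression and the doubled backslashes. Each pair of parentheses is a subpattern whose match feeds one column of the schema.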

Header

Enter the number of rows to be skipped at the beginning of the file.

Schema and Edit Schema

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

Built-In: You create and store the schema locally for this component only.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Skip empty rows

Select this check box to skip the empty rows.

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Advanced settings

Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:
  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.