tFileInputRegex - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

A powerful component that can replace a number of other components of the File family. It requires some advanced knowledge of regular expression syntax.

Purpose

Opens a file and reads it row by row, splitting each row into fields using regular expressions. It then sends the fields, as defined in the schema, to the next component in the Job.

If you have subscribed to one of the Talend solutions with Big Data, this component is available in the following types of Jobs:

  • Standard: see tFileInputRegex properties.

  • Map/Reduce: see tFileInputRegex in Talend Map/Reduce Jobs.

  • Spark Batch: see tFileInputRegex properties in Spark Batch Jobs.

  • Spark Streaming: see tFileInputRegex properties in Spark Streaming Jobs.

tFileInputRegex properties

Component family

File/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

File Name/Stream

File name: Name of the file and/or the variable to be processed.

Stream: Data flow to be processed. The data must be added to the flow so that it can be collected by tFileInputRegex via the INPUT_STREAM variable available in the autocompletion list (Ctrl+Space).

For further information about how to define and use a variable in a Job, see Talend Studio User Guide.
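
For illustration, here is a minimal sketch of the streaming setup; the file path, the use of a tJava component, and the variable wiring are assumptions for this example only. A tJava component executed before tFileInputRegex opens a stream and registers it in globalMap:

    // tJava code (hypothetical setup): open an input stream and publish it
    // under the INPUT_STREAM key so that downstream components can read it.
    java.io.InputStream in = new java.io.FileInputStream("/tmp/input.txt"); // assumed path
    globalMap.put("INPUT_STREAM", in);

The File Name/Stream field of tFileInputRegex would then contain:

    (java.io.InputStream)globalMap.get("INPUT_STREAM")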

 

Row separator

Enter the separator used to identify the end of a row.

 

Regex

This field can contain multiple lines. Type in your regular expression, including the subpatterns matching the fields to be extracted.

Note: Backslashes need to be doubled in regular expressions.

Warning

Regex syntax requires double quotes.
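
For example, here is a minimal sketch of a regular expression, assuming each row holds a name and a numeric age separated by a semicolon (a hypothetical layout):

    "^([a-zA-Z]+);(\\d+)$"

Each parenthesized subpattern feeds one schema column, in order; note the surrounding double quotes and the doubled backslash in \\d.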

 

Header

Enter the number of rows to be skipped in the beginning of file.

 

Footer

Number of rows to be skipped at the end of the file.

 

Limit

Maximum number of rows to be processed. If Limit = 0, no row is read or processed.

 

Ignore error message for the unmatched record

Select this check box to avoid outputting error messages for records that do not match the specified regular expression. This check box is cleared by default.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Skip empty rows

Select this check box to skip the empty rows.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

In the Map/Reduce version of tFileInputRegex, you need to select the Custom encoding check box to display this list.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
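
For illustration, here is a minimal sketch of retrieving these variables in a tJava component placed after the subjob; the component name tFileInputRegex_1 is an assumption for this example:

    // After variables are read from globalMap once the component has finished.
    Integer nbLine = (Integer) globalMap.get("tFileInputRegex_1_NB_LINE");
    String errorMessage = (String) globalMap.get("tFileInputRegex_1_ERROR_MESSAGE");
    System.out.println("Rows processed: " + nbLine);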

Usage

Use this component to read a file and separate fields contained in this file according to the defined Regex. You can also create a rejection flow using a Row > Reject link to filter the data which doesn't correspond to the type defined. For an example of how to use these two links, see Scenario 2: Extracting correct and erroneous data from an XML field in a delimited file.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

n/a

Scenario: Regex to Positional file

The following scenario creates a two-component Job, reading data from an input file using a regular expression and outputting the extracted data into a positional file.

Dropping and linking the components

  1. Drop a tFileInputRegex component from the Palette to the design workspace.

  2. Drop a tFileOutputPositional component the same way.

  3. Right-click on the tFileInputRegex component and select Row > Main. Drag this main row link onto the tFileOutputPositional component and release when the plug symbol displays.

Configuring the components

  1. Select the tFileInputRegex component again so that its Component view displays, and define its properties.

  2. The Property type is Built-In for this scenario; hence, the properties are set for this Job only.

  3. Fill in a path to the file in File Name field. This field is mandatory.

  4. Define the Row separator identifying the end of a row.

  5. Then define the regular expression in order to delimit the fields of a row, which are to be passed on to the next component. You can type in a regular expression using Java code, on multiple lines if needed (see the example after this list).

    Warning

    Regex syntax requires double quotes.

  6. In this expression, make sure you include all subpatterns matching the fields to be extracted.

  7. In this scenario, ignore the header, footer and limit fields.

  8. Select a local (Built-in) Schema to define the data to pass on to the tFileOutputPositional component.

  9. You can load or create the schema through the Edit Schema function.

  10. Then define the properties of the second component, tFileOutputPositional:

  11. Enter the Positional file output path.

  12. Enter the Encoding standard in which the output file is encoded. Note that, for the time being, encoding consistency verification is not supported.

  13. Select the Schema type. Click on Sync columns to automatically synchronize the schema with the Input file schema.
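
As an illustration of step 5, assuming a hypothetical input row such as 12;Smith;Paris (this sample layout is not part of the original scenario), the following expression splits each row into three fields:

    "^(\\d+);(\\w+);(\\w+)$"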

Saving and executing the Job

  1. Press Ctrl+S to save your Job.

  2. Now go to the Run tab, and click on Run to execute the Job.

    The file is read row by row and split up into fields based on the regular expression definition. You can open the output file using any standard file editor.

tFileInputRegex in Talend Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of the Talend solutions with Big Data and is not applicable to Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tFileInputRegex, as well as the other Map/Reduce components preceding it, generates native Map/Reduce code. This section presents the specific properties of tFileInputRegex when it is used in that situation. For further information about a Talend Map/Reduce Job, see Talend Big Data Getting Started Guide.

Component family

File/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; tFileInputRegex then automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo
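
For example, a hypothetical value for this field combining these rules:

    "/user/talend/in/*,/user/talend/archive/data.gz"

Here the wildcard includes the sub-folders of /user/talend/in, the comma separates two paths, and the .gz file is automatically decompressed at runtime.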

Note that you need to ensure you have properly configured the connection to the Hadoop distribution to be used in the Hadoop configuration tab in the Run view.

 

Row separator

Enter the separator used to identify the end of a row.

 

Regex

This field can contain multiple lines. Type in your regular expression, including the subpatterns matching the fields to be extracted.

Note: Backslashes need to be doubled in regular expressions.

Warning

Regex syntax requires double quotes.

 

Header

Enter the number of rows to be skipped in the beginning of file.

 

Footer

Number of rows to be skipped at the end of the file.

 

Limit

Maximum number of rows to be processed. If Limit = 0, no row is read or processed.

 

Schema and Edit Schema

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Skip empty rows

Select this check box to skip the empty rows.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

In the Map/Reduce version of tFileInputRegex, you need to select the Custom encoding check box to display this list.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage in Map/Reduce Jobs

Use this component to read a file and separate fields contained in this file according to the defined Regex. You can also create a rejection flow using a Row > Reject link to filter the data which doesn't correspond to the type defined.

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs and not Map/Reduce Jobs.

Limitation

n/a

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileInputRegex properties in Spark Batch Jobs

Component family

File/Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to read the data from a given HDFS system.

 

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; tFileInputRegex then automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Row separator

Enter the separator used to identify the end of a row.

 

Regex

This field can contain multiple lines. Type in your regular expression, including the subpatterns matching the fields to be extracted.

Note: Backslashes need to be doubled in regular expressions.

Warning

Regex syntax requires double quotes.

 

Header

Enter the number of rows to be skipped in the beginning of file.

 

Schema and Edit Schema

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Skip empty rows

Select this check box to skip the empty rows.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Advanced settings

Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Usage in Spark Batch Jobs

In a Talend Spark Batch Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Batch components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, one and only one file system related component from the Storage family is required in the same Job, so that Spark can use this component to connect to the file system to which those JAR files are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component yet.

tFileInputRegex properties in Spark Streaming Jobs

Warning

The streaming version of this component is available in the Palette of the studio on the condition that you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Component family

File/Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to read the data from a given HDFS system.

 

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; tFileInputRegex then automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Row separator

Enter the separator used to identify the end of a row.

 

Regex

This field can contain multiple lines. Type in your regular expression, including the subpatterns matching the fields to be extracted.

Note: Backslashes need to be doubled in regular expressions.

Warning

Regex syntax requires double quotes.

 

Header

Enter the number of rows to be skipped in the beginning of file.

 

Schema and Edit Schema

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Skip empty rows

Select this check box to skip the empty rows.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Advanced settings

Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Usage in Spark Streaming Jobs

In a Talend Spark Streaming Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Streaming components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component is only used to provide the lookup flow (the right side of a join operation) to the main flow of a tMap component. In this situation, the lookup model used by this tMap must be Load once.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, one and only one file system related component from the Storage family is required in the same Job, so that Spark can use this component to connect to the file system to which those JAR files are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component yet.