tFileInputRegex Standard properties - Cloud - 8.0


Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > File components (Integration) > Regex components
Data Quality and Preparation > Third-party systems > File components (Integration) > Regex components
Design and Development > Third-party systems > File components (Integration) > Regex components
Last publication date
2024-02-20

These properties are used to configure tFileInputRegex running in the Standard Job framework.

The Standard tFileInputRegex component belongs to the File family.

The component in this framework is available in all Talend products.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the properties are stored.

File name/Stream

File name: Name of the file and/or the variable to be processed.

Warning: Use an absolute path (instead of a relative path) in this field to avoid possible errors.

Stream: Data flow to be processed. The data must be added to the flow so that tFileInputRegex can collect it via the INPUT_STREAM variable in the autocompletion list (Ctrl+Space).

For further information about how to define and use a variable in a Job, see Using contexts and variables.
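The stream feed above can be sketched in plain Java. This is a minimal illustration, not Talend runtime code: a plain HashMap stands in for the globalMap object that the Talend runtime provides, and the INPUT_STREAM key name is used for illustration (in a real Job, the exact variable is picked from the Ctrl+Space autocompletion list).

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class StreamFeedSketch {
    public static void main(String[] args) throws Exception {
        // In a real Job the Talend runtime provides globalMap; a plain Map
        // stands in for it here so this sketch runs on its own.
        Map<String, Object> globalMap = new HashMap<>();

        // A component placed before tFileInputRegex (e.g. a tJava) could add
        // data to the flow like this; the key name is illustrative.
        String rows = "2024-02-20;WARN;disk usage high\n2024-02-21;INFO;ok\n";
        globalMap.put("INPUT_STREAM",
                new ByteArrayInputStream(rows.getBytes(StandardCharsets.UTF_8)));

        // tFileInputRegex would then consume the stream selected via the
        // INPUT_STREAM variable.
        InputStream in = (InputStream) globalMap.get("INPUT_STREAM");
        System.out.print(new String(in.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```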

Row separator

The separator used to identify the end of a row.

Regex

Type in your Java regular expression including the subpattern matching the fields to be extracted. This field can contain multiple lines.

Note: Backslashes need to be doubled in regular expressions.

Warning:
  • The regular expression needs to be in double quotes.
  • To extract all the desired strings, make sure the regular expression contains the corresponding subpatterns that match the strings. Also, each subpattern in the regular expression needs to be enclosed in a pair of parentheses.
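The rules above can be sketched in plain Java, since the Regex field takes a Java regular expression. The input row and its format are assumptions for illustration; note the doubled backslashes and the one parenthesized subpattern per field to extract.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSubpatterns {
    public static void main(String[] args) {
        // Hypothetical input row; the format is an assumption for illustration.
        String row = "2024-02-20 WARN disk usage at 85%";

        // One parenthesized subpattern per field to extract: date, level, message.
        // Backslashes are doubled because the expression is written as a Java
        // string literal, exactly as in the component's Regex field.
        String regex = "(\\d{4}-\\d{2}-\\d{2}) (\\w+) (.*)";

        Matcher m = Pattern.compile(regex).matcher(row);
        if (m.matches()) {
            System.out.println(m.group(1)); // 2024-02-20
            System.out.println(m.group(2)); // WARN
            System.out.println(m.group(3)); // disk usage at 85%
        }
    }
}
```

Each pair of parentheses becomes one extracted field, mapped in order to the columns of the component's schema.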

Header

Enter the number of rows to be skipped at the beginning of the file.

Footer

Number of rows to be skipped at the end of the file.

Limit

Maximum number of rows to be processed. If Limit = 0, no rows are read or processed.

Ignore error message for the unmatched record

Select this check box to avoid outputting error messages for records that do not match the specified regex. This check box is cleared by default.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Skip empty rows

Select this check box to skip the empty rows.

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com.

In the Map/Reduce version of tFileInputRegex, you need to select the Custom encoding check box to display this list.
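Because the supported encodings depend on the JVM, you can check what a given JVM offers with the standard java.nio.charset API. A small sketch:

```java
import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // Lists how many charsets the current JVM supports; the set varies
        // between JVM vendors and versions.
        System.out.println(Charset.availableCharsets().size() + " charsets available");

        // ISO-8859-1 and UTF-8 are guaranteed by the Java specification,
        // so these checks always pass.
        System.out.println(Charset.isSupported("ISO-8859-1")); // true
        System.out.println(Charset.isSupported("UTF-8"));      // true

        // The default charset used when none is specified explicitly.
        System.out.println(Charset.defaultCharset());
    }
}
```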

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For more information about variables, see Using contexts and variables.

Usage

Usage rule

Use this component to read a file and separate the fields it contains according to the defined Regex. You can also create a rejection flow using a Row > Reject link to filter out the data that does not match the defined type. For an example of how to use these two links, see Procedure.