tFileInputPositional Standard properties - Cloud - 8.0

Positional

Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > File components (Integration) > Positional components
Data Quality and Preparation > Third-party systems > File components (Integration) > Positional components
Design and Development > Third-party systems > File components (Integration) > Positional components
Last publication date
2024-02-20

These properties are used to configure tFileInputPositional running in the Standard Job framework.

The Standard tFileInputPositional component belongs to the File family.

The component in this framework is available in all Talend products.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file in which the properties are stored.

Use existing dynamic

Select this check box to reuse an existing dynamic schema to handle data from unknown columns.

When this check box is selected, a Component list appears allowing you to select the component used to set the dynamic schema.

File name/Stream

File name: Name and path of the file to be processed.

Warning: Use absolute path (instead of relative path) for this field to avoid possible errors.

Stream: The data flow to be processed. The data must be added to the flow so that tFileInputPositional can fetch it via the corresponding variable.

This variable may be pre-defined in Talend Studio or provided by the context or by the components you are using along with this component, for example, the INPUT_STREAM variable of tFileFetch; otherwise, you can define it manually and use it according to the design of your Job, for example, using tJava or tJavaFlex.

To avoid typing the variable by hand, you can select it from the auto-completion list (Ctrl+Space) to fill the current field, provided that the variable has been properly defined.

For a related scenario using the input stream, see Reading data from a remote file in streaming mode.
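As an illustration, the following Java sketch simulates how a component such as tJava can register an input stream under a variable name for a downstream component to consume. The globalMap shown here is a plain HashMap standing in for the map that Talend's generated code provides; the variable name INPUT_STREAM and the sample data are assumptions for this example, not the component's actual implementation.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class StreamDemo {
    // Stand-in for Talend's globalMap (an assumption for this sketch;
    // in a real Job, the generated code provides globalMap).
    static Map<String, Object> globalMap = new HashMap<>();

    public static void main(String[] args) {
        // In a tJava component you might register the stream like this:
        String positionalData = "John    Smith   19800512\nJane    Doe     19900730\n";
        globalMap.put("INPUT_STREAM",
                new ByteArrayInputStream(positionalData.getBytes()));

        // tFileInputPositional would then read from the stream selected in
        // its File name/Stream field, retrieved here for demonstration:
        InputStream in = (InputStream) globalMap.get("INPUT_STREAM");
        Scanner sc = new Scanner(in);
        while (sc.hasNextLine()) {
            System.out.println(sc.nextLine());
        }
        sc.close();
    }
}
```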

Row separator

The separator used to identify the end of a row.

Use byte length as the cardinality

Select this check box to enable support for double-byte characters in this component. JDK 1.6 is required for this feature.

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. By default, it is a space.

Alignment: Select the appropriate alignment parameter.

Pattern

Length values separated by commas, entered as a string between double quotation marks (for example, "8,5,8"). Make sure the values entered in this field are consistent with the schema defined.
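To make the relationship between the Pattern lengths and the resulting fields concrete, here is a minimal Java sketch (Java being the language of Talend's generated code). It slices a record according to an assumed pattern "8,5,8" with Pattern Units set to Symbols and trims the default space padding; it approximates what the component does and is not its actual implementation.

```java
public class PatternDemo {
    // Slice a fixed-width record according to a comma-separated pattern
    // such as "8,5,8" (lengths counted in characters).
    static String[] slice(String line, String pattern) {
        String[] parts = pattern.split(",");
        String[] fields = new String[parts.length];
        int pos = 0;
        for (int i = 0; i < parts.length; i++) {
            int len = Integer.parseInt(parts[i].trim());
            int end = Math.min(pos + len, line.length());
            // Strip the padding character (a space by default, matching
            // the Padding char setting) from both ends of the field.
            fields[i] = line.substring(pos, end).trim();
            pos = end;
        }
        return fields;
    }

    public static void main(String[] args) {
        // "John" padded to 8 chars, "Smith" fills 5, date fills 8.
        String[] f = slice("John    Smith19800512", "8,5,8");
        System.out.println(String.join("|", f)); // John|Smith|19800512
    }
}
```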

Pattern Units

The unit of the length values specified in the Pattern field.

  • Bytes: With this option selected, the length values in the Pattern field should be the number of bytes that represent the symbols in the original encoding of the input file.

  • Symbols: With this option selected, the length values in the Pattern field should be the number of regular symbols, not including surrogate pairs.

  • Symbols (including rare): With this option selected, the length values in the Pattern field should be the number of symbols, including rare symbols such as surrogate pairs, with each surrogate pair counting as a single symbol. For performance reasons, it is not recommended to use this option when your input data consists of only regular symbols.
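The difference between the three units can be seen with standard Java string methods. The sample string below, containing a character outside the Basic Multilingual Plane (stored as a surrogate pair in Java strings), is an assumption chosen for illustration.

```java
import java.nio.charset.StandardCharsets;

public class UnitsDemo {
    public static void main(String[] args) {
        // "a" plus U+1D11E MUSICAL SYMBOL G CLEF, which Java stores
        // as a surrogate pair (two UTF-16 code units).
        String s = "a" + new String(Character.toChars(0x1D11E));

        int bytes   = s.getBytes(StandardCharsets.UTF_8).length; // Bytes
        int units   = s.length();                                // UTF-16 code units
        int symbols = s.codePointCount(0, s.length());           // Symbols (including rare)

        System.out.println(bytes);   // 5: 1 byte for 'a' + 4 for the clef in UTF-8
        System.out.println(units);   // 3: 'a' + the surrogate pair
        System.out.println(symbols); // 2: each surrogate pair counts as one symbol
    }
}
```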

Skip empty rows

Select this check box to skip the empty rows.

Uncompress as zip file

Select this check box to uncompress the input file.

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Header

Enter the number of rows to be skipped at the beginning of the file.

Footer

Enter the number of rows to be skipped at the end of the file.

Limit

Maximum number of rows to be processed. If Limit = 0, no row is read or processed.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Dynamic schema.

This dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and should be used for that purpose only; it is not recommended for creating tables.

This component must work with tSetDynamicSchema to leverage the dynamic schema feature.

Built-In: The schema is created and stored locally for this component only. For more information about a component schema in its Basic settings tab, see Basic settings tab.

Repository: The schema already exists and is stored in the Repository, and can therefore be reused in various projects and Jobs. For more information about a component schema in its Basic settings tab, see Basic settings tab.

Advanced settings

Needed to process rows longer than 100 000 characters

Select this check box if the rows to be processed in the input file are longer than 100 000 characters.

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define the separator used for thousands.

Decimal separator: define the separator used for decimals.
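In Java terms (the language of Talend's generated code), swapping the separators corresponds to configuring DecimalFormatSymbols, as in this hedged sketch; the pattern and sample value are assumptions for illustration, not the component's actual implementation.

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.util.Locale;

public class SeparatorDemo {
    // Parse a number that uses '.' for thousands and ',' for decimals,
    // the reverse of the component's defaults.
    static double parseSwapped(String text) {
        DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.ROOT);
        symbols.setGroupingSeparator('.');
        symbols.setDecimalSeparator(',');
        DecimalFormat format = new DecimalFormat("#,##0.##", symbols);
        try {
            return format.parse(text).doubleValue();
        } catch (ParseException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseSwapped("1.234,56")); // 1234.56
    }
}
```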

Trim all column

Select this check box to remove leading and trailing whitespaces from defined columns.

Validate date

Select this check box to check the date format strictly against the input schema.

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables


NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For more information about variables, see Using contexts and variables.
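For illustration, the snippet below mimics how the generated code exposes an After variable and how you would read it from a tJava component placed after this one. The HashMap stands in for Talend's globalMap, and the component name tFileInputPositional_1 is an assumption for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarsDemo {
    // Stand-in for the globalMap that Talend's generated code provides.
    static Map<String, Object> globalMap = new HashMap<>();

    public static void main(String[] args) {
        // After tFileInputPositional_1 finishes, its After variables are
        // stored under keys of the form <component>_<VARIABLE>:
        globalMap.put("tFileInputPositional_1_NB_LINE", 42);

        // In a tJava component you would retrieve the value like this:
        Integer nbLine = (Integer) globalMap.get("tFileInputPositional_1_NB_LINE");
        System.out.println("Rows processed: " + nbLine);
    }
}
```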

Usage

Usage rule

Use this component to read a file and separate the fields of each row according to their positions and lengths. You can also create a rejection flow using a Row > Reject link to filter out the data which does not match the type defined. For an example of how to use these two links, see Procedure.