tFileInputPositional - 6.3

Talend Open Studio for Big Data Components Reference Guide


Function

tFileInputPositional reads a given file row by row and extracts fields based on a pattern.

Purpose

This component opens a file and reads it row by row, splits each row into fields, and then sends the fields, as defined in the schema, to the next component in the Job via a Row link.

tFileInputPositional properties

Component family

File/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

File Name/Stream

File name: Name and path of the file to be processed.

Stream: The data flow to be processed. The data must be added to the flow so that tFileInputPositional can fetch it via the corresponding variable.

This variable may already be pre-defined in your Studio or provided by the context or by the components you are using along with this component, for example the INPUT_STREAM variable of tFileFetch; otherwise, you can define it manually and use it according to the design of your Job, for example using tJava or tJavaFlex.

To avoid typing the variable name by hand, you can select the variable of interest from the auto-completion list (Ctrl+Space) to fill in the current field, provided that the variable has been properly defined.

For related information about the available variables, see Talend Studio User Guide.

For a related scenario using the input stream, see Scenario 2: Reading data from a remote file in streaming mode.
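As an illustration only, the following tJava-style sketch shows one way the stream variable could be defined by hand and then referenced in this field; the key positional_stream and the sample content are hypothetical choices for this example, and globalMap is the map available in Talend-generated Job code.

    // Hypothetical code for a tJava component executed before this component:
    // it registers an in-memory stream under a key of our choosing.
    java.io.InputStream in = new java.io.ByteArrayInputStream(
            "00001          8200           50330      \n"
                    .getBytes(java.nio.charset.StandardCharsets.UTF_8));
    globalMap.put("positional_stream", in);

In the File Name/Stream field of tFileInputPositional, the same key would then be referenced, for example (java.io.InputStream) globalMap.get("positional_stream").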

 

Row separator

Enter the separator used to identify the end of a row, for example "\n" for Unix-style line endings or "\r\n" for Windows-style line endings.

 

Use byte length as the cardinality

Select this check box to enable support for double-byte characters in this component. JDK 1.6 is required for this feature.

 

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. By default, it is a space.

Alignment: Select the appropriate alignment parameter.

 

Pattern

Enter the field length values, separated by commas, as a string between double quotation marks. Make sure the values entered in this field are consistent with the schema defined.
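For illustration only, the following standalone sketch shows how a series of length values (here 8, 12 and 12, the equivalent of a pattern of "8,12,12") carves a fixed-width row into fields; the sample row and lengths are made up, and the component performs this splitting internally.

    import java.util.Arrays;

    public class PatternSketch {
        public static void main(String[] args) {
            // A made-up fixed-width row: 8 + 12 + 12 characters.
            String row = "00000001JohnDoe     2017-01-15  ";
            int[] lengths = {8, 12, 12};
            String[] fields = new String[lengths.length];
            int offset = 0;
            for (int i = 0; i < lengths.length; i++) {
                fields[i] = row.substring(offset, offset + lengths[i]);
                offset += lengths[i];
            }
            // Prints [00000001, JohnDoe     , 2017-01-15  ]
            System.out.println(Arrays.toString(fields));
        }
    }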

 

Skip empty rows

Select this check box to skip the empty rows.

 

Uncompress as zip file

Select this check box to uncompress the input file.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 

Header

Enter the number of rows to be skipped at the beginning of the file.

 

Footer

Number of rows to be skipped at the end of the file.

 

Limit

Maximum number of rows to be processed. If Limit = 0, no row is read or processed.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

This component must work with tSetDynamicSchema to leverage the dynamic schema feature.

 

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: The schema already exists and is stored in the Repository, hence can be reused in various projects and Job flowcharts. Related topic: see Talend Studio User Guide.

Advanced settings

Needed to process rows longer than 100 000 characters

Select this check box if the rows to be processed in the input file are longer than 100 000 characters.

 

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define the separator used for thousands.

Decimal separator: define the separator used for decimals.
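As a rough illustration of what these separators mean (this is not the component's actual implementation), the following standalone sketch parses a number written with a period as the thousands separator and a comma as the decimal separator, using standard Java classes; the sample value is made up.

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.util.Locale;

    public class SeparatorSketch {
        public static void main(String[] args) throws Exception {
            DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.ROOT);
            symbols.setGroupingSeparator('.');   // thousands separator used in the file
            symbols.setDecimalSeparator(',');    // decimal separator used in the file
            DecimalFormat format = new DecimalFormat("#,##0.##", symbols);
            // "1.234,56" is read as the number 1234.56
            System.out.println(format.parse("1.234,56"));
        }
    }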

 

Trim all column

Select this check box to remove leading and trailing whitespaces from defined columns.

 

Validate date

Select this check box to check the date format strictly against the input schema.

 

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use.
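For example, the NB_LINE value could be read in a tJava component placed after the subjob; the instance name tFileInputPositional_1 below is an assumption and depends on how the component is actually labeled in your Job.

    // Hypothetical tJava snippet executed after the subjob has finished.
    Integer nbLine = (Integer) globalMap.get("tFileInputPositional_1_NB_LINE");
    System.out.println("Rows read: " + nbLine);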

For further information about variables, see Talend Studio User Guide.

Usage

Use this component to read a file and split each row into fields based on the field positions (lengths) defined in the pattern. You can also create a rejection flow using a Row > Reject link to filter out the data that does not match the defined types. For an example of how to use these two links, see Scenario 2: Extracting correct and erroneous data from an XML field in a delimited file.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Scenario: From Positional to XML file

The following scenario describes a two-component Job that reads data from an input file containing contract numbers, customer references, and insurance numbers, as shown below, and outputs the selected data (based on its position) to an XML file.

Contract       CustomerRef    InsuranceNr
00001          8200           50330      
00001          8201           50331      
00002          8202           50332      
00002          8203           50333      

Dropping and linking components

  1. Drop a tFileInputPositional component from the Palette to the design workspace.

  2. Drop a tFileOutputXML component as well. The output file is meant to receive the references in a structured way.

  3. Right-click the tFileInputPositional component and select Row > Main. Then drag the link onto the tFileOutputXML component and release when the plug symbol appears.

Configuring data input

  1. Double-click the tFileInputPositional component to show its Basic settings view and define its properties.

  2. Define the Property type if needed. For this scenario, we use the built-in Property type.

    As opposed to the Repository, this means that the properties are set locally for this Job only.

  3. Fill in a path to the input file in the File Name field. This field is mandatory.

  4. If needed, define the Row separator that identifies the end of a row; by default, it is a carriage return.

  5. If required, select the Use byte length as the cardinality check box to enable support for double-byte characters.

  6. Define the Pattern used to delimit the fields in a row. The pattern is a series of length values corresponding to the field widths in your input file. Enter the values between double quotation marks and separate them with commas, making sure they match the schema defined (see the note after this list).

  7. Fill in the Header, Footer and Limit fields according to your input file structure and your needs. In this scenario, we only need to skip the first row when reading the input file. To do this, set the Header field to 1 and leave the other fields as they are.

  8. Next to Schema, select Repository if the input schema is stored in the Repository. In this use case, we use a Built-In input schema to define the data to pass on to the tFileOutputXML component.

  9. You can load and/or edit the schema via the Edit Schema function. For this schema, define three columns, Contract, CustomerRef and InsuranceNr, matching the structure of the input file. Then click OK to close the [Schema] dialog box and propagate the changes.
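Note that the exact Pattern value depends on the real column widths of your file. Judging only from the sample layout shown at the beginning of this scenario, a value such as "15,15,11" (15 characters for Contract, 15 for CustomerRef and 11 for InsuranceNr) would fit, but this is an estimate based on that sample and should be adjusted to your actual file.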

Configuring data output

  1. Double-click tFileOutputXML to show its Basic settings view.

  2. Enter the XML output file path.

  3. Define the row tag that will wrap each row of data, in this use case ContractRef.

  4. Click the three-dot button next to Edit schema to view the data structure, and click Sync columns to retrieve the data structure from the input component if needed.

  5. Switch to the Advanced settings tab view to define other settings for the XML output.

  6. Click the plus button to add a line in the Root tags table, and enter a root tag (or more) to wrap the XML output structure, in this case ContractsList.

  7. Define parameters in the Output format table if needed. For example, select the As attribute check box for a column if you want to use its name and value as an attribute of the parent XML element, or clear the Use schema column name check box for a column if you want to give it a tag label other than the schema column name. In this use case, we keep all the default output format settings as they are.

  8. To group output rows according to the contract number, select the Use dynamic grouping check box, add a line in the Group by table, select Contract from the Column list field, and enter an attribute for it in the Attribute label field.

  9. Leave all the other parameters as they are.

Saving and executing the Job

  1. Press Ctrl+S to save your Job to ensure that all the configured parameters take effect.

  2. Press F6 or click Run on the Run tab to execute the Job.

    The file is read row by row based on the length values defined in the Pattern field and output as an XML file as defined in the output settings. You can open it using any standard XML editor.