tFileOutputPositional Standard properties - Cloud - 8.0

Positional

Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > File components (Integration) > Positional components
Data Quality and Preparation > Third-party systems > File components (Integration) > Positional components
Design and Development > Third-party systems > File components (Integration) > Positional components
Last publication date
2024-02-20

These properties are used to configure tFileOutputPositional running in the Standard Job framework.

The Standard tFileOutputPositional component belongs to the File family.

The component in this framework is available in all Talend products.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the properties are stored.

Use existing dynamic

Select this check box to reuse an existing dynamic schema to handle data from unknown columns.

When this check box is selected, a Component list appears allowing you to select the component used to set the dynamic schema.

Use Output Stream

Select this check box to process the data flow of interest. Once selected, the Output Stream field appears and you can enter the data flow of interest.

The data flow to be processed must be added to the flow so that this component can fetch the data through the corresponding variable.

This variable may already be pre-defined in Talend Studio or provided by the context or by the components used along with this component; otherwise, you can define it manually and use it according to the design of your Job, for example with tJava or tJavaFlex.

To avoid typing it by hand, you can select the variable of interest from the auto-completion list (Ctrl+Space) to fill the current field, provided that the variable has been properly defined.

For further information about how to use a stream, see Reading data from a remote file in streaming mode.
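In a Job, the stream referenced in the Output Stream field is typically created in an earlier tJava component and registered in globalMap. A minimal sketch of that pattern, using a ByteArrayOutputStream and a plain HashMap as a stand-in for the Job's globalMap (the key "out_stream" and the method names are illustrative assumptions, not Talend conventions):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

public class OutputStreamDemo {
    // Stand-in for Talend's globalMap; in a real Job this map is provided
    static Map<String, Object> globalMap = new HashMap<>();

    // What a tJava placed before tFileOutputPositional might do:
    // create the stream and register it under a key of your choosing
    static void registerStream() {
        globalMap.put("out_stream", new ByteArrayOutputStream());
    }

    // The Output Stream field would then reference the same object:
    //   (java.io.OutputStream) globalMap.get("out_stream")
    static int writeRow(String row) throws IOException {
        OutputStream os = (OutputStream) globalMap.get("out_stream");
        os.write(row.getBytes("UTF-8"));
        os.flush();
        return ((ByteArrayOutputStream) os).size(); // total bytes written so far
    }

    public static void main(String[] args) throws IOException {
        registerStream();
        System.out.println(writeRow("id001name  \n"));
    }
}
```

In an actual Job, you would replace the ByteArrayOutputStream with, for example, a FileOutputStream or a remote stream, and enter the globalMap cast expression directly in the Output Stream field.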

File Name

Name of or path to the file to be processed and/or the variable to be used.

This field becomes unavailable once you have selected the Use Output Stream check box.

For further information about how to define and use a variable in a Job, see Using contexts and variables.

Warning: Use absolute path (instead of relative path) for this field to avoid possible errors.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Dynamic schema.

This dynamic schema feature is designed to retrieve unknown columns of a table and is recommended for that purpose only; it is not recommended for creating tables.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Row separator

The separator used to identify the end of a row.

Append

Select this check box to add the new rows at the end of the file.

Include header

Select this check box to include the column header to the file.

Compress as zip file

Select this check box to compress the output file in zip format.

Formats

Customize the positional file data format and fill in the columns in the Formats table.

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Type in, between quotes, the padding character to be used. It is a space by default.

Alignment: Select the appropriate alignment parameter.

Keep: If the data in the column or field is too long, select the part you want to keep.
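The Size, Padding char, Alignment, and Keep settings above can be pictured as a simple pad-or-truncate rule applied to each column value. A simplified sketch (the method name and boolean flags are illustrative assumptions, not the component's internal API):

```java
public class PositionalFormat {
    // Fit a value into a fixed-size column: pad when short, truncate when long
    static String fit(String value, int size, char pad,
                      boolean alignLeft, boolean keepLeft) {
        if (value.length() > size) {
            // "Keep" decides which part of an oversized value survives
            return keepLeft ? value.substring(0, size)
                            : value.substring(value.length() - size);
        }
        StringBuilder padding = new StringBuilder();
        for (int i = value.length(); i < size; i++) {
            padding.append(pad);
        }
        // "Alignment" decides which side the padding goes on
        return alignLeft ? value + padding : padding + value;
    }

    public static void main(String[] args) {
        System.out.println(fit("42", 5, '0', false, true));    // right-aligned, zero-padded
        System.out.println(fit("TALEND", 4, ' ', true, true)); // truncated, left part kept
    }
}
```

This is only a mental model of the output: for example, a numeric column of size 5, padded with '0' and right-aligned, turns 42 into 00042.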

Advanced settings

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define the separator for thousands.

Decimal separator: define the separator for decimals.

Use byte length as the cardinality

Select this check box to add support for double-byte characters to this component. JDK 1.6 is required for this feature.

Create directory if not exists

This check box is selected by default. It creates the directory that holds the output file if it does not already exist.

Custom the flush buffer size

Select this check box to define the number of lines to write before emptying the buffer.

Row Number: set the number of lines to write before the buffer is emptied.
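The flush behavior described above is the standard buffered-writer pattern: rows accumulate in memory and are pushed to the underlying writer every N rows. A self-contained sketch (the method name and row values are illustrative, and a StringWriter stands in for the output file):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;

public class FlushDemo {
    // Write rows, emptying the buffer every flushEvery rows,
    // mimicking the Row Number setting
    static String writeRows(String[] rows, int flushEvery) throws IOException {
        StringWriter target = new StringWriter();
        BufferedWriter bw = new BufferedWriter(target);
        for (int i = 0; i < rows.length; i++) {
            bw.write(rows[i]);
            bw.write("\n"); // row separator
            if ((i + 1) % flushEvery == 0) {
                bw.flush(); // push buffered rows to the underlying writer
            }
        }
        bw.close(); // flushes any remaining rows
        return target.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.print(writeRows(new String[] {"r1", "r2", "r3"}, 2));
    }
}
```

A smaller Row Number trades throughput for earlier visibility of the written data; a larger one buffers more rows in memory between flushes.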

Output in row mode

Select this check box to write the data one row at a time.

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com.

Don't generate empty file

Select this check box if you do not want to generate empty files.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For more information about variables, see Using contexts and variables.
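After the component has run, NB_LINE is typically read from globalMap in a later tJava, using a key built from the component name. A sketch with a plain HashMap standing in for the Job's globalMap (the component name tFileOutputPositional_1 and the value 250 are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarDemo {
    // Read the After variable NB_LINE for a given component name
    static int nbLine(Map<String, Object> globalMap, String component) {
        return (Integer) globalMap.get(component + "_NB_LINE");
    }

    public static void main(String[] args) {
        // Stand-in for the Job's globalMap; in a real Job it is provided
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("tFileOutputPositional_1_NB_LINE", 250);

        // Typical retrieval in a tJava placed after the component
        System.out.println(nbLine(globalMap, "tFileOutputPositional_1"));
    }
}
```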

Usage

Usage rule

Use this component to write a file row by row, with each field padded or truncated to the fixed length defined in the Formats table.

Dynamic settings

Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your HDFS connection dynamically from multiple connections planned in your Job. This feature is useful when you need to access files in different HDFS systems or different distributions, especially when you are working in an environment where you cannot change your Job settings, for example, when your Job has to be deployed and executed independently of Talend Studio.

The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Dynamic schema and Creating a context group and define context variables in it.