tFileOutputParquet Standard properties - Cloud - 8.0

Parquet

Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > File components (Integration) > Parquet components
Data Quality and Preparation > Third-party systems > File components (Integration) > Parquet components
Design and Development > Third-party systems > File components (Integration) > Parquet components
Last publication date
2024-02-29

These properties are used to configure tFileOutputParquet running in the Standard Job framework.

The Standard tFileOutputParquet component belongs to the File family.

Note: If you are using a Windows platform, make sure Hadoop Winutils and the Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package MFC Security Update are installed before using this component.

The component in this framework is available in all subscription-based Talend products.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Dynamic schema.

The dynamic schema feature is designed for retrieving the unknown columns of a table and is recommended for that purpose only; it is not recommended for creating tables.

File name

Name or path to the output file and/or the variable to be used.

For further information about how to define and use a variable in a Job, see Using contexts and variables.

Warning: Use an absolute path (instead of a relative path) in this field to avoid possible errors.

Action

Select an operation for writing data:

Create: Creates the file and writes data to it.

Overwrite: Overwrites the file if it already exists.

Compression

By default, the Uncompressed option is selected. Select the Gzip or the Snappy option to compress the output data.

Use external Hadoop dependencies

Select this check box to use external Hadoop dependencies and enter the path in the File name field in the following format: "file:///path/out.parquet".

Advanced settings

Row group size (in bytes)

Set the row group size in bytes.

Page size (in bytes)

Set the page size in bytes.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

FILE_PATH: the path pointing to the folder or the file being processed. This is a Flow variable and it returns a string.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For more information about variables, see Using contexts and variables.

Usage

Usage rule

This component is used as an end component and requires an input link.