tFileInputPositional - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tFileInputPositional reads a given file row by row and extracts fields based on a pattern.

Purpose

This component opens a file and reads it row by row, splitting each row into fields, then sends the fields, as defined in the schema, to the next component in the Job via a Row link.

If you have subscribed to one of the Talend solutions with Big Data, this component is also available in Map/Reduce, Spark Batch, and Spark Streaming Jobs, as described in the corresponding sections below.

tFileInputPositional properties

Component family

File/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

Use existing dynamic

Select this check box to reuse an existing dynamic schema to handle data from unknown columns.

When this check box is selected, a Component list appears allowing you to select the component used to set the dynamic schema.

 

File Name/Stream

File name: Name and path of the file to be processed.

Stream: The data flow to be processed. The data must be added to the flow so that tFileInputPositional can fetch it via the corresponding variable.

This variable may already be pre-defined in your Studio, or provided by the context or by the components you are using along with this component, for example the INPUT_STREAM variable of tFileFetch; otherwise, you can define it manually and use it according to the design of your Job, for example using tJava or tJavaFlex.

To avoid typing the variable by hand, you can select it from the auto-completion list (Ctrl+Space) to fill the current field, provided that the variable has been properly defined.
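For example, assuming a tFileFetch component labeled tFileFetch_1 precedes this component in the Job design, you could enter an expression like the following in this field to read the fetched stream (a sketch; the exact name depends on your Job):

    ((java.io.InputStream)globalMap.get("tFileFetch_1_INPUT_STREAM"))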

For related information about the available variables, see Talend Studio User Guide.

For a related scenario using the input stream, see Scenario 2: Reading data from a remote file in streaming mode.

 

Row separator

Enter the separator used to identify the end of a row.
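For example, "\n" matches Unix-type line endings; this is typically the default value.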

 

Use byte length as the cardinality

Select this check box to enable support for double-byte characters in this component. JDK 1.6 is required for this feature.

 

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. It is a space by default.

Alignment: Select the appropriate alignment parameter.

 

Pattern

Enter the length values separated by commas, interpreted as a string between quotes. Make sure the values entered in this field are consistent with the schema defined.
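For example, assuming a file whose rows consist of a 6-character id, a 12-character name and a 12-character city (as in the dynamic schema scenario below), you would enter:

    "6,12,12"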

 

Skip empty rows

Select this check box to skip the empty rows.

 

Uncompress as zip file

Select this check box to uncompress the input file.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 

Header

Enter the number of rows to be skipped at the beginning of the file.

 

Footer

Number of rows to be skipped at the end of the file.

 

Limit

Maximum number of rows to be processed. If Limit = 0, no row is read or processed.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Talend Studio User Guide.

This dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and is recommended to be used for this purpose only; it is not recommended for creating tables.

This component must work with tSetDynamicSchema to leverage the dynamic schema feature.

 

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: The schema already exists and is stored in the Repository, hence can be reused in various projects and Job flowcharts. Related topic: see Talend Studio User Guide.

Advanced settings

Needed to process rows longer than 100 000 characters

Select this check box if the rows to be processed in the input file are longer than 100 000 characters.

 

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define separators for thousands.

Decimal separator: define separators for decimals.
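For example, assuming input numbers formatted like 1.234.567,89, you would set Thousands separator to "." and Decimal separator to ",".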

 

Trim all column

Select this check box to remove leading and trailing whitespaces from defined columns.

 

Validate date

Select this check box to check the date format strictly against the input schema.

 

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
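For example, assuming this component is labeled tFileInputPositional_1, you could retrieve the number of rows it processed in a later component such as tJava with an expression like the following (a sketch; the exact name depends on the label of the component in your Job):

    ((Integer)globalMap.get("tFileInputPositional_1_NB_LINE"))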

For further information about variables, see Talend Studio User Guide.

Usage

Use this component to read a file and separate fields based on the defined position values. You can also create a rejection flow using a Row > Reject link to filter the data which does not correspond to the type defined. For an example of how to use these two links, see Scenario 2: Extracting correct and erroneous data from an XML field in a delimited file.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Scenario 1: From Positional to XML file

The following scenario describes a two-component Job, which aims at reading data from an input file that contains contract numbers, customer references, and insurance numbers as shown below, and outputting the selected data (according to the data position) into an XML file.

Contract       CustomerRef    InsuranceNr
00001          8200           50330      
00001          8201           50331      
00002          8202           50332      
00002          8203           50333      

Dropping and linking components

  1. Drop a tFileInputPositional component from the Palette to the design workspace.

  2. Drop a tFileOutputXML component as well. This file is meant to receive the references in a structured way.

  3. Right-click the tFileInputPositional component and select Row > Main. Then drag the link onto the tFileOutputXML component and release the mouse button when the plug symbol appears.

Configuring data input

  1. Double-click the tFileInputPositional component to show its Basic settings view and define its properties.

  2. Define the Job Property type if needed. For this scenario, we use the built-in Property type.

    Unlike the Repository option, this means that the property settings are stored locally for this component only.

  3. Fill in a path to the input file in the File Name field. This field is mandatory.

  4. Define the Row separator identifying the end of a row if needed. By default, it is a carriage return.

  5. If required, select the Use byte length as the cardinality check box to enable support for double-byte characters.

  6. Define the Pattern to delimit fields in a row. The pattern is a series of length values corresponding to the field widths in your input file. The values should be entered between quotes and separated by commas. Make sure the values you enter match the schema defined (see the example after this list).

  7. Fill in the Header, Footer and Limit fields according to your input file structure and your needs. In this scenario, we only need to skip the first row when reading the input file. To do this, fill the Header field with 1 and leave the other fields as they are.

  8. Next to Schema, select Repository if the input schema is stored in the Repository. In this use case, we use a Built-In input schema to define the data to pass on to the tFileOutputXML component.

  9. You can load and/or edit the schema via the Edit Schema function. For this schema, define three columns, Contract, CustomerRef and InsuranceNr, matching the structure of the input file. Then, click OK to close the [Schema] dialog box and propagate the changes.
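For the sample input file shown above, the three fields are 15, 15 and 11 characters wide, so the value to enter in the Pattern field would be the following (an assumption based on the layout of the sample data):

    "15,15,11"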

Configuring data output

  1. Double-click tFileOutputXML to show its Basic settings view.

  2. Enter the XML output file path.

  3. Define the row tag that will wrap each row of data, in this use case ContractRef.

  4. Click the three-dot button next to Edit schema to view the data structure, and click Sync columns to retrieve the data structure from the input component if needed.

  5. Switch to the Advanced settings tab view to define other settings for the XML output.

  6. Click the plus button to add a line in the Root tags table, and enter a root tag (or more) to wrap the XML output structure, in this case ContractsList.

  7. Define parameters in the Output format table if needed. For example, select the As attribute check box for a column if you want to use its name and value as an attribute for the parent XML element, or clear the Use schema column name check box for a column to use a custom tag label instead of the column label from the input schema. In this use case, we keep all the default output format settings as they are.

  8. To group output rows according to the contract number, select the Use dynamic grouping check box, add a line in the Group by table, select Contract from the Column list field, and enter an attribute for it in the Attribute label field.

  9. Leave all the other parameters as they are.

Saving and executing the Job

  1. Press Ctrl+S to save your Job to ensure that all the configured parameters take effect.

  2. Press F6 or click Run on the Run tab to execute the Job.

    The file is read row by row based on the length values defined in the Pattern field and output as an XML file as defined in the output settings. You can open it using any standard XML editor.

Scenario 2: Handling a positional file based on a dynamic schema

This scenario describes a four-component Job that reads data from a positional file, writes the data to another positional file, and replaces the padding characters with spaces. The schema column details are not defined in the positional file components; instead, the components leverage a reusable dynamic schema. The input file used in this scenario is as follows:

id----name--------city--------
1-----Andrews-----Paris-------
2-----Mark--------London------
3-----Marie-------Paris-------
4-----Michael-----Washington--

Dropping and linking components

  1. Drop the following components from the Palette onto the design workspace: tFixedFlowInput, tSetDynamicSchema, tFileInputPositional, and tFileOutputPositional.

  2. Connect the tFixedFlowInput component to the tSetDynamicSchema component using a Row > Main connection to form a subjob. This subjob will define a reusable dynamic schema.

  3. Connect the tFileInputPositional component to the tFileOutputPositional component using a Row > Main connection to form another subjob. This subjob will read data from the input positional file and write the data to another positional file based on the dynamic schema set in the previous subjob.

  4. Connect the tFixedFlowInput component to the tFileInputPositional component using a Trigger > On Subjob Ok connection to link the two subjobs together.

Configuring the first subjob: creating a dynamic schema

  1. Double-click the tFixedFlowInput component to show its Basic settings view and define its properties.

  2. Click the [...] button next to Edit schema to open the [Schema] dialog box.

  3. Click the [+] button to add three columns: ColumnName, ColumnType, and ColumnLength, and set their types to String, String, and Integer respectively to define the minimum properties required for a positional file schema. Then, click OK to close the dialog box.

  4. Select the Use Inline Table option and click the [+] button three times to add three lines. In the ColumnName field, name them after the actual columns of the input file to read: ID, Name, and City. In the corresponding ColumnType field, set their types: id_Integer for column ID, and id_String for columns Name and City. In the corresponding ColumnLength field, set the length values of the columns. Note that the column names you give in this table will compose the header of the output file.

  5. Double-click the tSetDynamicSchema component to open its Basic settings view.

  6. Click Sync columns to ensure that the schema structure is properly retrieved from the preceding component.

  7. Under the Parameters table, click the [+] button to add three lines in the table.

  8. Click in the Property field for each line, and select ColumnName, Type, and Length respectively.

  9. Click in the Value field for each line, and select ColumnName, ColumnType, and ColumnLength respectively.

    Now, with the values set in the inline table of the tFixedFlowInput component retrieved, the following data structure is defined in the dynamic schema:

    Column Name   Type      Length
    ID            Integer   6
    Name          String    12
    City          String    12

Configuring the second subjob: reading and writing positional data

  1. Double-click the tFileInputPositional component to open its Basic settings view.

    Warning

    The dynamic schema feature is only supported in Built-In mode and requires the input file to have a header row.

  2. Select the Use existing dynamic check box, and from the Component List that appears, select the tSetDynamicSchema component used to create the dynamic schema. In this use case, only one tSetDynamicSchema component is used, so it is automatically selected.

  3. In the File name/Stream field, enter the path to the input positional file, or browse to the file path by clicking the [...] button.

  4. Fill in the Header, Footer and Limit fields according to your input file structure and your needs. In this scenario, we only need to skip the first row when reading the input file. To do this, fill the Header field with 1 and leave the other fields as they are.

  5. Click the [...] button next to Edit schema to open the [Schema] dialog box, define only one column, dyn in this example, and select Dynamic from the Type list. Then, click OK to close the [Schema] dialog box and propagate the changes.

  6. Select the Customize check box, enter "-" in the Padding char field, and keep the other settings as they are.

  7. Double-click the tFileOutputPositional component to open its Basic settings view.

  8. Select the Use existing dynamic check box, specify the output file path, and select the Include header check box.

  9. In the Padding char field, enter " " so that the padding characters will be replaced with spaces in the output file.

Saving and executing the Job

  1. Press Ctrl+S to save your Job to ensure that all the configured parameters take effect.

  2. Press F6 or click Run on the Run tab to execute the Job.

    The data is read from the input positional file and written to the output positional file, with the padding characters replaced by spaces.
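    Assuming the sample input file shown at the beginning of this scenario, the output file would look like the following sketch, where the header row is composed of the column names set in the inline table:

    ID    Name        City
    1     Andrews     Paris
    2     Mark        London
    3     Marie       Paris
    4     Michael     Washington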

tFileInputPositional in Talend Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of the Talend solutions with Big Data and is not applicable to Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tFileInputPositional, as well as the whole Map/Reduce Job using it, generates native Map/Reduce code. This section presents the specific properties of tFileInputPositional when it is used in that situation. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

MapReduce / Input

 

Basic settings

Property type

Either Built-In or Repository.

  

Built-In: No property data stored centrally.

  

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are pre-filled using the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).
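For example, entering /user/talend/in1,/user/talend/in2 (illustrative paths) makes the component read the files stored in both folders.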

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputPositional automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection to the Hadoop distribution to be used in the Hadoop configuration tab in the Run view.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 

Row separator

Enter the separator used to identify the end of a row.

 

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. It is a space by default.

Alignment: Select the appropriate alignment parameter.

 

Pattern

Enter, between double quotes, the length values separated by commas; they are interpreted as a string. Make sure the values entered in this field are consistent with the schema defined.

 

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 to skip no rows for data without a header, and 1 for data with a header in the first row.

 

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Custom Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Then select the encoding to be used from the list or select Custom and define it manually.

 

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

 

Trim columns

Select this check box to remove the leading and trailing whitespaces from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select particular columns to trim.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage in Map/Reduce Jobs

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileInputPositional as well as the MapReduce family appears in the Palette of the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileInputPositional properties in Spark Batch Jobs

Warning

The streaming version of this component is available in the Palette of the studio on the condition that you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Component family

File/Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to read data from a given HDFS system.

 

Property type

Either Built-In or Repository.

  

Built-In: No property data stored centrally.

  

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are pre-filled using the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputPositional automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

 

Row separator

Enter the separator used to identify the end of a row.

 

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. It is a space by default.

Alignment: Select the appropriate alignment parameter.

 

Pattern

Enter, between double quotes, the length values separated by commas; they are interpreted as a string. Make sure the values entered in this field are consistent with the schema defined.

 

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 to skip no rows for data without a header, and 1 for data with a header in the first row.

 

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Custom Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

 

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

 

Trim columns

Select this check box to remove the leading and trailing whitespaces from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select particular columns to trim.

Usage in Spark Batch Jobs

In a Talend Spark Batch Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Batch components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file system related component from the Storage family is required in the same Job so that Spark can use this component to connect to the file system to which the jar files dependent on the Job are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component yet.

tFileInputPositional properties in Spark Streaming Jobs

Warning

The streaming version of this component is available in the Palette of the studio on the condition that you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Component family

File/Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to read data from a given HDFS system.

 

Property type

Either Built-In or Repository.

  

Built-In: No property data stored centrally.

  

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are pre-filled using the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputPositional automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

 

Row separator

Enter the separator used to identify the end of a row.

 

Customize

Select this check box to customize the data format of the positional file and define the table columns:

Column: Select the column you want to customize.

Size: Enter the column size.

Padding char: Enter, between double quotation marks, the padding character you need to remove from the field. It is a space by default.

Alignment: Select the appropriate alignment parameter.

 

Pattern

Enter, between double quotes, the length values separated by commas; they are interpreted as a string. Make sure the values entered in this field are consistent with the schema defined.

 

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 to skip no rows for data without a header, and 1 for data with a header in the first row.

 

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Custom Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

 

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

 

Trim columns

Select this check box to remove the leading and trailing whitespaces from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select particular columns to trim.

Usage in Spark Streaming Jobs

In a Talend Spark Streaming Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Streaming components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component is only used to provide the lookup flow (the right side of a join operation) to the main flow of a tMap component. In this situation, the lookup model used by this tMap must be Load once.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file system related component from the Storage family is required in the same Job so that Spark can use this component to connect to the file system to which the jar files dependent on the Job are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component yet.