tFileInputXML - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tFileInputXML reads an XML structured file and extracts data row by row.

Purpose

Opens an XML structured file and reads it row by row, splitting each row into fields, then sends the fields as defined in the schema to the next component, via a Row link.

If you have subscribed to one of the Talend solutions with Big Data, this component is also available in Map/Reduce, Spark Batch, and Spark Streaming Jobs, each described in its own section below.

tFileInputXML Properties

Component family

XML or File/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

File name/Stream

File name: Name and path of the file to be processed.

Stream: The data flow to be processed. The data must be added to the flow so that tFileInputXML can fetch it via the corresponding variable.

This variable may be pre-defined in your Studio, or provided by the context or by the components you are using along with this component, for example the INPUT_STREAM variable of tFileFetch; otherwise, you can define it manually and use it according to the design of your Job, for example with tJava or tJavaFlex.

To avoid typing the variable by hand, you can select it from the auto-completion list (Ctrl+Space) to fill the current field, provided the variable has been properly defined.

For the available variables, see Talend Studio User Guide. For a related scenario involving the input stream, see Scenario 2: Reading data from a remote file in streaming mode.
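The stream mechanism can be pictured in plain Java: the component consumes an InputStream rather than a file path. The sketch below is illustrative only (the XML payload and class name are invented); in a real Job the stream would be produced by an upstream component such as tFileFetch rather than built by hand.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class StreamReadSketch {
    public static void main(String[] args) throws Exception {
        // Simulate a data flow: in a Job, this stream would come from an
        // upstream component and be referenced via its variable.
        String xml = "<customers><customer><id>1</id></customer></customers>";
        InputStream stream =
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8));

        // tFileInputXML-style consumption: parse the stream, not a file.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(stream);
        System.out.println(doc.getDocumentElement().getTagName()); // prints "customers"
    }
}
```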

 

Loop XPath query

Node of the tree on which the loop is based.

 

Mapping

Column: Columns to map. They reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to be extracted from the structured input.

Get nodes: Select this check box to retrieve the XML content of all current nodes specified in the XPath query list, or select the check box next to specific XML nodes to retrieve only the content of the selected nodes. These nodes are important when the output flow from this component needs to use the XML structure, for example, the Document data type.

For further information about the Document type, see Talend Studio User Guide.

Note

The Get Nodes option functions in both the DOM4j and SAX modes, although namespaces are not supported in SAX mode. For further information about the DOM4j and SAX modes, see the Generation mode list in the Advanced settings tab.
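To make the Loop XPath query / Mapping relationship concrete, here is a plain-Java sketch (sample data and class name are invented): the loop query selects one node per output row, and each mapping query is then evaluated relative to that loop node.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class LoopXPathSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<customers>"
                + "<customer><id>1</id><name>Ann</name></customer>"
                + "<customer><id>2</id><name>Bob</name></customer>"
                + "</customers>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Loop XPath query: one output row per matching node.
        NodeList rows = (NodeList) xpath.evaluate(
                "/customers/customer", doc, XPathConstants.NODESET);
        for (int i = 0; i < rows.getLength(); i++) {
            Node row = rows.item(i);
            // Mapping XPath queries are relative to the loop node.
            String id = xpath.evaluate("id", row);
            String name = xpath.evaluate("name", row);
            System.out.println(id + "|" + name);
        }
    }
}
```

Running this prints one line per loop node, mirroring how each schema column is filled from its XPath query.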

 

Limit

Maximum number of rows to be processed. If Limit = 0, no row is read or processed. If Limit = -1, all rows are read and processed.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Ignore DTD file

Select this check box to ignore the DTD file indicated in the XML file being processed.

 

Advanced separator (for number)

Select this check box to change the separators used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define the separator to use for thousands.

Decimal separator: define the separator to use for decimals.
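As an illustration of parsing numbers with non-default separators, the standard java.text.DecimalFormat API accepts custom symbols; the sample value below assumes a period as thousands separator and a comma as decimal separator (the reverse of the defaults).

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;

public class SeparatorSketch {
    public static void main(String[] args) throws Exception {
        // Swap the default separators: '.' for thousands, ',' for decimals.
        DecimalFormatSymbols symbols = new DecimalFormatSymbols();
        symbols.setGroupingSeparator('.');
        symbols.setDecimalSeparator(',');
        DecimalFormat format = new DecimalFormat("#,##0.##", symbols);

        // "1.234,56" parses as one thousand two hundred thirty-four point 56.
        double value = format.parse("1.234,56").doubleValue();
        System.out.println(value); // prints 1234.56
    }
}
```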

 

Ignore the namespaces

Select this check box to ignore namespaces.

Generate a temporary file: click the three-dot button to browse to the XML temporary file and set its path in the field.

 

Use Separator for mode Xerces

Select this check box if you want to separate concatenated children node values.

Note

This field can only be used if the selected Generation mode is Xerces.

The following field then appears:

Field separator: Define the delimiter to be used to separate the children node values.

 

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

 

Generation mode

From the drop-down list, select the generation mode for the XML file, according to the memory available and the desired speed:

  • Slow and memory-consuming (Dom4j)

    Note

This option allows you to use dom4j to process highly complex XML files.

  • Memory-consuming (Xerces).

  • Fast with low memory consumption (SAX)
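The trade-off behind the SAX option can be sketched with the standard JAXP SAX API (sample document and class name invented): the parser streams the document and fires callbacks per element, so memory use stays flat however large the input, whereas a DOM4j-style parser loads the whole tree first.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<customers><customer/><customer/><customer/></customers>";
        final int[] count = {0};
        // SAX never builds a full in-memory tree: it pushes events to the
        // handler as it reads, which is why it is fast with low memory use.
        SAXParserFactory.newInstance().newSAXParser().parse(
                new ByteArrayInputStream(xml.getBytes("UTF-8")),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String local,
                            String qName, Attributes atts) {
                        if ("customer".equals(qName)) count[0]++;
                    }
                });
        System.out.println(count[0]); // prints 3
    }
}
```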

 

Validate date

Select this check box to check the date format strictly against the input schema.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
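In generated Job code, these variables live in a map named globalMap, keyed as <componentId>_<VARIABLE>. The sketch below simulates that map with a hypothetical component id tFileInputXML_1; the second half shows the typical expression you would write in a downstream tJava component.

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarSketch {
    public static void main(String[] args) {
        // Stand-in for the Job's globalMap; the component id and the value 42
        // are assumptions for the purpose of the example.
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("tFileInputXML_1_NB_LINE", 42); // set after the component runs

        // Typical read in a downstream tJava component (an After variable,
        // so it is only meaningful once tFileInputXML has finished):
        Integer nbLine = (Integer) globalMap.get("tFileInputXML_1_NB_LINE");
        System.out.println("rows processed: " + nbLine);
    }
}
```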

Usage

tFileInputXML is for use as an entry component. It allows you to create a flow of XML data using a Row > Main link. You can also create a rejection flow using a Row > Reject link to filter out the data that does not match the defined type. For an example of how to use these two links, see Scenario 2: Extracting correct and erroneous data from an XML field in a delimited file.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitation

n/a

Scenario 1: Reading and extracting data from an XML structure

This scenario describes a basic Job that reads a defined XML file, extracts specific information, and outputs it to the Run console via a tLogRow component.

  1. Drop tFileInputXML and tLogRow from the Palette to the design workspace.

  2. Connect both components together using a Main Row link.

  3. Double-click tFileInputXML to open its Basic settings view and define the component properties.

  4. As the street dir file used as the input file has been previously defined in the Metadata area, select Repository as the Property type. This way, the properties are automatically retrieved and the remaining property fields are filled in (apart from Schema). For more information about the metadata creation wizards, see Talend Studio User Guide.

  5. In the same way, select the relevant schema from the Repository metadata list. Click Edit schema if you want to make any changes to the loaded schema.

  6. The Filename field shows the structured file to be used as input.

  7. In Loop XPath query, change the node of the structure on which the loop is based, if needed.

  8. In the Mapping table, fill in the fields to be extracted and displayed in the output.

  9. If the file is large, fill in a Limit for the number of rows to be read.

  10. Enter the encoding if needed, then double-click tLogRow to define the separator character.

  11. Save your Job and press F6 to execute it.

The fields defined in the input properties are extracted from the XML structure and displayed on the console.

Scenario 2: Extracting erroneous XML data via a reject flow

This Java scenario describes a three-component Job that reads an XML file and:

  1. first, returns correct XML data in an output XML file,

  2. and second, displays on the console erroneous XML data whose type does not correspond to the type defined in the schema.

  1. Drop the following components from the Palette to the design workspace: tFileInputXML, tFileOutputXML and tLogRow.

    Right-click tFileInputXML and select Row > Main in the contextual menu and then click tFileOutputXML to connect the components together.

    Right-click tFileInputXML and select Row > Reject in the contextual menu and then click tLogRow to connect the components together using a reject link.

  2. Double-click tFileInputXML to display the Basic settings view and define the component properties.

  3. In the Property Type list, select Repository and click the three-dot button next to the field to display the [Repository Content] dialog box where you can select the metadata relative to the input file if you have already stored it in the File xml node under the Metadata folder of the Repository tree view. The fields that follow are automatically filled with the fetched data. If not, select Built-in and fill in the fields that follow manually.

    For more information about storing schema metadata in the Repository tree view, see Talend Studio User Guide.

  4. In the Schema Type list, select Repository and click the three-dot button to open the dialog box where you can select the schema that describes the structure of the input file, if you have already stored it in the Repository tree view. If not, select Built-in and click the three-dot button next to Edit schema to open a dialog box where you can define the schema manually.

    The schema in this example consists of five columns: id, CustomerName, CustomerAddress, idState and id2.

  5. Click the three-dot button next to the Filename field and browse to the XML file you want to process.

  6. In the Loop XPath query, enter between inverted commas the path of the XML node on which to loop in order to retrieve data.

    In the Mapping table, Column is automatically populated with the defined schema.

    In the XPath query column, enter between inverted commas the node of the XML file that holds the data you want to extract from the corresponding column.

  7. In the Limit field, enter the number of lines to be processed, the first 10 lines in this example.

  8. Double-click tFileOutputXML to display its Basic settings view and define the component properties.

  9. Click the three-dot button next to the File Name field and browse to the output XML file you want to collect data in, customer_data.xml in this example.

    In the Row tag field, enter between inverted commas the name you want to give to the tag that will hold the retrieved data.

    Click Edit schema to display the schema dialog box and make sure that the schema matches that of the preceding component. If not, click Sync columns to retrieve the schema from the preceding component.

  10. Double-click tLogRow to display its Basic settings view and define the component properties.

    Click Edit schema to open the schema dialog box and make sure that the schema matches that of the preceding component. If not, click Sync columns to retrieve the schema of the preceding component.

    In the Mode area, select the Vertical option.

  11. Save your Job and press F6 to execute it.

The output file customer_data.xml holding the correct XML data is created in the defined path and erroneous XML data is displayed on the console of the Run view.

tFileInputXML in Talend Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of the Talend solutions with Big Data and is not applicable to Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tFileInputXML, as well as the whole Map/Reduce Job using it, generates native Map/Reduce code. This section presents the specific properties of tFileInputXML when it is used in that situation. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

MapReduce/Input

 

Basic settings

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are automatically filled in with the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; tFileInputXML then automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection to the Hadoop distribution to be used in the Hadoop configuration tab in the Run view.
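For intuition about the decompression step: the runtime selects a codec from the file extension, which for *.gz amounts to wrapping the raw stream in a java.util.zip.GZIPInputStream. The self-contained sketch below (payload and class name invented) compresses a small XML snippet in memory and reads it back.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {
    public static void main(String[] args) throws Exception {
        // Compress a small XML payload in memory (stands in for a *.gz file).
        byte[] xml = "<customers/>".getBytes("UTF-8");
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(xml);
        }

        // Reading: for a *.gz extension, decompression is equivalent to
        // wrapping the raw stream in a GZIPInputStream before parsing.
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(buffer.toByteArray())),
                "UTF-8"));
        String line = reader.readLine();
        System.out.println(line); // prints <customers/>
    }
}
```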

 

Element to extract

Enter the element from which you need to read the contents and the child elements of the input XML data.

The element defined in this field is used as the root node of any XPath specified within this component. This element helps define the atomic units of the XML data to be used so that however big the original document is or wherever the input is split, the rows within this element can be correctly distributed to the mapper tasks.

Note that any content outside this element is ignored and the child elements of this element cannot contain this element itself.
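The relationship between Element to extract and the loop can be sketched in plain Java (sample data and element names invented): each extracted element is an atomic unit, relative XPath queries are rooted at it, and content outside it plays no part in the rows produced.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class ElementToExtractSketch {
    public static void main(String[] args) throws Exception {
        // <generated> sits outside the extracted element and is ignored.
        String xml = "<orders><generated>2015-01-01</generated>"
                + "<order><item>pen</item><item>ink</item></order>"
                + "<order><item>paper</item></order></orders>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Element to extract: order. Each <order> is one atomic unit, and the
        // loop query is then evaluated relative to that extracted element.
        NodeList orders = (NodeList) xpath.evaluate(
                "//order", doc, XPathConstants.NODESET);
        for (int i = 0; i < orders.getLength(); i++) {
            NodeList items = (NodeList) xpath.evaluate(
                    "item", orders.item(i), XPathConstants.NODESET);
            System.out.println("order " + i + ": " + items.getLength() + " item(s)");
        }
    }
}
```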

 

Loop XPath query

Node of the tree on which the loop is based.

Note its root is the element you have defined in the Element to extract field.

 

Mapping

Column: Columns to map. They reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to be extracted from the structured input.

Get nodes: Select this check box to retrieve the XML content of all current nodes specified in the XPath query list, or select the check box next to specific XML nodes to retrieve only the content of the selected nodes. These nodes are important when the output flow from this component needs to use the XML structure, for example, the Document data type.

For further information about the Document type, see Talend Studio User Guide.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Ignore the namespaces

Select this check box to ignore namespaces.

 

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage in Map/Reduce Jobs

Because of the characteristics of the MapReduce framework, the Map/Reduce version of tFileInputXML does not support any of the following XML parsers: DOM-based parsers, SAX-based parsers, or streaming-based parsers.

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileInputXML as well as the MapReduce family appears in the Palette of the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileInputXML Properties in Spark Batch Jobs

Component family

File / Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.

 

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are automatically filled in with the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; tFileInputXML then automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Element to extract

Enter the element from which you need to read the contents and the child elements of the input XML data.

The element defined in this field is used as the root node of any XPath specified within this component. This element helps define the atomic units of the XML data to be used so that however big the original document is or wherever the input is split, the rows within this element can be correctly distributed to the mapper tasks.

Note that any content outside this element is ignored and the child elements of this element cannot contain this element itself.

 

Loop XPath query

Node of the tree on which the loop is based.

Note its root is the element you have defined in the Element to extract field.

 

Mapping

Column: Columns to map. They reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to be extracted from the structured input.

Get nodes: Select this check box to retrieve the XML content of all current nodes specified in the XPath query list, or select the check box next to specific XML nodes to retrieve only the content of the selected nodes. These nodes are important when the output flow from this component needs to use the XML structure, for example, the Document data type.

For further information about the Document type, see Talend Studio User Guide.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Advanced settings

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

Usage in Spark Batch Jobs

In a Talend Spark Batch Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Batch components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file system related component from the Storage family is required in the same Job so that Spark can use this component to connect to the file system to which the jar files dependent on the Job are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component yet.

tFileInputXML Properties in Spark Streaming Jobs

Warning

The streaming version of this component is available in the Palette of the Studio on the condition that you have subscribed to Talend Real-Time Big Data Platform or Talend Data Fabric.

Component family

File / Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.

 

Property type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

The properties are stored centrally under the Hadoop Cluster node of the Repository tree.

The fields that follow are automatically filled in with the fetched data.

For further information about the Hadoop Cluster node, see the Getting Started Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the path like /user/talend/in/*.

If you want to specify more than one file or directory in this field, separate each path using a comma (,).

If the file to be read is a compressed one, enter the file name with its extension; tFileInputXML then automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Element to extract

Enter the element from which you need to read the contents and the child elements of the input XML data.

The element defined in this field is used as the root node of any XPath specified within this component. This element helps define the atomic units of the XML data to be used so that however big the original document is or wherever the input is split, the rows within this element can be correctly distributed to the mapper tasks.

Note that any content outside this element is ignored and the child elements of this element cannot contain this element itself.

 

Loop XPath query

Node of the tree on which the loop is based.

Note its root is the element you have defined in the Element to extract field.

 

Mapping

Column: Columns to map. They reflect the schema as defined in the Schema type field.

XPath Query: Enter the fields to be extracted from the structured input.

Get nodes: Select this check box to retrieve the XML content of all current nodes specified in the XPath query list, or select the check box next to specific XML nodes to retrieve only the content of the selected nodes. These nodes are important when the output flow from this component needs to use the XML structure, for example, the Document data type.

For further information about the Document type, see Talend Studio User Guide.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Advanced settings

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.

Usage in Spark Streaming Jobs

In a Talend Spark Streaming Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Streaming components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component is only used to provide the lookup flow (the right side of a join operation) to the main flow of a tMap component. In this situation, the lookup model used by this tMap must be Load once.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, one and only one file system related component from the Storage family is required in the same Job so that Spark can use this component to connect to the file system to which the jar files dependent on the Job are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component yet.