tFileInputJSON - 6.1

Talend Components Reference Guide

EnrichVersion
6.1
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Open Studio for MDM
Talend Real-Time Big Data Platform
task
Data Governance
Data Quality and Preparation
Design and Development
EnrichPlatform
Talend Studio

Function

tFileInputJSON extracts JSON data from a file.

Purpose

tFileInputJSON extracts JSON data from a file, then transfers the data to a file, a database table, etc.

If you have subscribed to one of the Talend solutions with Big Data, this component is also available in Map/Reduce, Spark Batch, and Spark Streaming Jobs, which are documented in the sections below.

tFileInputJSON properties

Component Family

File / Input

 

Basic settings

Property Type

Either Built-In or Repository.

 

 

Built-In: No property data stored centrally.

 

 

Repository: Select the repository file where the properties are stored.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

 

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Read By

Select a way of extracting the JSON data in the file.

  • Xpath: Extracts the JSON data based on the XPath query.

  • JsonPath: Extracts the JSON data based on the JSONPath query. Reading the data by JSONPath is recommended for better performance.

  • JsonPath without loop: Extracts the JSON data based on the JSONPath query without setting a loop node.

 

Use Url

Select this check box to retrieve data directly from the Web.

 

URL

Enter the URL path from which you will retrieve data.

This field is available only when the Use Url check box is selected.

 

Filename

Specify the file from which you will retrieve data.

This field is not visible if the Use Url check box is selected.

 

Loop Jsonpath query

Enter the path pointing to the node within the JSON field, on which the loop is based.

Note that if you have selected Xpath from the Read by drop-down list, the Loop Xpath query field is displayed instead.

 

Mapping

Complete this table to map the columns defined in the schema to the corresponding JSON nodes.

  • Column: The Column cells are automatically filled with the defined schema column names.

  • Json query/JSONPath query: Specify the JSONPath node that holds the desired data. For more information about JSONPath expressions, see http://goessner.net/articles/JsonPath/.

    This column is available only when JsonPath is selected from the Read By list.

  • XPath query: Specify the XPath node that holds the desired data.

    This column is available only when Xpath is selected from the Read By list.

  • Get Nodes: Select this check box to extract the JSON data of all the nodes or select the check box next to a specific node to extract the data of that node.

    This column is available only when Xpath is selected from the Read By list.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs. Clear the check box to skip the row on error and complete the process for error-free rows. If needed, you can collect the rows on error using a Row > Reject link.

Advanced settings

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: enter the character to be used as the thousands separator.

Decimal separator: enter the character to be used as the decimal separator.
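For illustration, the conversion amounts to stripping the thousands separator and normalizing the decimal separator before the value is parsed as a number. The helper below is a hypothetical sketch of that logic, not part of the component (the Studio generates Java, not Python):

```python
# Hypothetical sketch: parse a number whose separators differ from the
# defaults (thousands = ",", decimal = "."). Here the defaults of the
# helper illustrate the common European convention instead.
def parse_number(text, thousands=".", decimal=","):
    """Strip the thousands separator, normalize the decimal separator,
    then convert to float."""
    return float(text.replace(thousands, "").replace(decimal, "."))

print(parse_number("1.234.567,89"))  # 1234567.89
```

With the component's default separators the same call would be `parse_number("1,234,567.89", thousands=",", decimal=".")`.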

 

Validate date

Select this check box to check the date format strictly against the input schema.

This check box is available only when Xpath is selected from the Read By drop-down list.

 

Encoding

Select the encoding type from the list or select Custom and define it manually. This field is compulsory for DB data handling.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage

This component is a start component of a Job and always needs an output link.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Scenario 1: Extracting JSON data from a file using JSONPath without setting a loop node

This scenario describes a two-component Job that extracts data from the JSON file Store.json by specifying the complete JSON path for each node of interest and displays the flat data extracted on the console.

The JSON file Store.json contains information about a department store and the content of the file is as follows:

{"store": {
    "name": "Sunshine Department Store",
    "address": "Wangfujing Street",
    "goods": {
        "book": [
            {
                "category": "Reference",
                "title": "Sayings of the Century",
                "author": "Nigel Rees",
                "price": 8.88
            },
            {
                "category": "Fiction",
                "title": "Sword of Honour",
                "author": "Evelyn Waugh",
                "price": 12.66
            }
        ],
        "bicycle": {
            "type": "GIANT OCR2600",
            "color": "White",
            "price": 276
        }
    }
}}

In the following example, we will extract the store name, the store address, and the bicycle information from this file.

Adding and linking the components

  1. Create a new Job and add a tFileInputJSON component and a tLogRow component by typing their names in the design workspace or dropping them from the Palette.

  2. Link the tFileInputJSON component to the tLogRow component using a Row > Main connection.

Configuring the components

  1. Double-click the tFileInputJSON component to open its Basic settings view.

  2. Select JsonPath without loop from the Read By drop-down list. With this option, you need to specify the complete JSON path for each node of interest in the JSONPath query fields of the Mapping table.

  3. Click the [...] button next to Edit schema to open the schema editor.

  4. Click the [+] button to add five columns, store_name, store_address, bicycle_type, and bicycle_color of String type, and bicycle_price of Double type.

    Click OK to close the schema editor. In the pop-up dialog box, click Yes to propagate the schema to the subsequent component.

  5. In the Filename field, specify the path to the JSON file that contains the data to be extracted. In this example, it is "E:/Store.json".

  6. In the Mapping table, the Column fields are automatically filled with the schema columns you have defined.

    In the JSONPath query fields, enter the JSONPath query expressions between double quotation marks to specify the nodes that hold the desired data.

    • For the columns store_name and store_address, enter the JSONPath query expressions "$.store.name" and "$.store.address", which point to the name and address nodes respectively.

    • For the columns bicycle_type, bicycle_color, and bicycle_price, enter the JSONPath query expressions "$.store.goods.bicycle.type", "$.store.goods.bicycle.color", and "$.store.goods.bicycle.price", which point to the child nodes type, color, and price of the bicycle node respectively.

  7. Double-click the tLogRow component to display its Basic settings view.

  8. In the Mode area, select Table (print values in cells of a table) for better readability of the result.
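Each of the five JSONPath queries above addresses exactly one value, starting from the document root. As an illustrative sketch (Python is used here only to show what the queries resolve to; it is not code the Studio generates), the same values can be read from Store.json with plain dictionary navigation:

```python
import json

# Store.json content from the scenario
store_json = """
{"store": {
    "name": "Sunshine Department Store",
    "address": "Wangfujing Street",
    "goods": {
        "book": [
            {"category": "Reference", "title": "Sayings of the Century",
             "author": "Nigel Rees", "price": 8.88},
            {"category": "Fiction", "title": "Sword of Honour",
             "author": "Evelyn Waugh", "price": 12.66}
        ],
        "bicycle": {"type": "GIANT OCR2600", "color": "White", "price": 276}
    }
}}
"""
doc = json.loads(store_json)

# Each absolute JSONPath from the Mapping table, resolved by hand, e.g.
# "$.store.name" -> doc["store"]["name"]
row = {
    "store_name":    doc["store"]["name"],
    "store_address": doc["store"]["address"],
    "bicycle_type":  doc["store"]["goods"]["bicycle"]["type"],
    "bicycle_color": doc["store"]["goods"]["bicycle"]["color"],
    "bicycle_price": float(doc["store"]["goods"]["bicycle"]["price"]),
}
print(row)
```

Each key of row corresponds to one schema column, so the result is the same single flat record that tLogRow prints.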

Executing the Job

  1. Press Ctrl+S to save the Job.

  2. Press F6 to execute the Job.

    The store name, the store address, and the bicycle information are extracted from the source JSON data and displayed in a flat table on the console.

Scenario 2: Extracting JSON data from a file using JSONPath

Based on Scenario 1: Extracting JSON data from a file using JSONPath without setting a loop node, this scenario extracts data under the book array of the JSON file Store.json by specifying a loop node and the relative JSON path for each node of interest, and then displays the flat data extracted on the console.

  1. In the Studio, open the Job used in Scenario 1: Extracting JSON data from a file using JSONPath without setting a loop node to display it in the design workspace.

  2. Double-click the tFileInputJSON component to open its Basic settings view.

  3. Select JsonPath from the Read By drop-down list.

  4. In the Loop Jsonpath query field, enter the JSONPath query expression between double quotation marks to specify the node on which the loop is based. In this example, it is "$.store.goods.book[*]".

  5. Click the [...] button next to Edit schema to open the schema editor.

    Select the five columns added previously and click the [x] button to remove all of them.

    Click the [+] button to add four columns, book_title, book_category, and book_author of String type, and book_price of Double type.

    Click OK to close the schema editor. In the pop-up dialog box, click Yes to propagate the schema to the subsequent component.

  6. In the Json query fields of the Mapping table, enter the JSONPath query expressions between double quotation marks to specify the nodes that hold the desired data. In this example, enter the JSONPath query expressions "title", "category", "author", and "price" relative to the four child nodes of the book node respectively.

  7. Press Ctrl+S to save the Job.

  8. Press F6 to execute the Job.

    The book information is extracted from the source JSON data and displayed in a flat table on the console.
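The difference from Scenario 1 is the loop: the Job emits one row per element matched by the loop query, and the Mapping queries are evaluated relative to the current loop element. A sketch of the equivalent logic (illustrative Python only, not code the Studio generates):

```python
import json

# Store.json abridged to the book array used in this scenario
store_json = """
{"store": {"goods": {"book": [
    {"category": "Reference", "title": "Sayings of the Century",
     "author": "Nigel Rees", "price": 8.88},
    {"category": "Fiction", "title": "Sword of Honour",
     "author": "Evelyn Waugh", "price": 12.66}
]}}}
"""
doc = json.loads(store_json)

# Loop query "$.store.goods.book[*]" selects each element of the book
# array; the Mapping queries "title", "category", "author", and "price"
# are then read relative to the current element.
rows = [
    {"book_title": b["title"], "book_category": b["category"],
     "book_author": b["author"], "book_price": float(b["price"])}
    for b in doc["store"]["goods"]["book"]
]
print(rows)
```

The loop produces two flat rows, one per book, which is what makes this mode suitable for arrays, unlike the "JsonPath without loop" mode of Scenario 1.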

Scenario 3: Extracting JSON data from a file using XPath

Based on Scenario 1: Extracting JSON data from a file using JSONPath without setting a loop node, this scenario extracts the store name and the book information from the JSON file Store.json using XPath queries and displays the flat data extracted on the console.

  1. In the Studio, open the Job used in Scenario 1: Extracting JSON data from a file using JSONPath without setting a loop node to display it in the design workspace.

  2. Double-click the tFileInputJSON component to open its Basic settings view.

  3. Select Xpath from the Read By drop-down list.

  4. Click the [...] button next to Edit schema to open the schema editor.

    Select the five columns added previously and click the [x] button to remove all of them.

    Click the [+] button to add five columns, store_name, book_title, book_category, and book_author of String type, and book_price of Double type.

    Click OK to close the schema editor. In the pop-up dialog box, click Yes to propagate the schema to the subsequent component.

  5. In the Loop XPath query field, enter the XPath query expression between double quotation marks to specify the node on which the loop is based. In this example, it is "/store/goods/book".

  6. In the XPath query fields of the Mapping table, enter the XPath query expressions between double quotation marks to specify the nodes that hold the desired data.

    • For the column store_name, enter the XPath query expression "../../name", which navigates from the current book node up two levels and reads the name node of the store.

    • For the columns book_title, book_category, book_author, and book_price, enter the XPath query expressions "title", "category", "author", and "price" relative to the four child nodes of the book node respectively.

  7. Press Ctrl+S to save the Job.

  8. Press F6 to execute the Job.

    The store name and the book information are extracted from the source JSON data and displayed in a flat table on the console.
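The parent axis is what distinguishes this XPath scenario: while the loop iterates the book array, "../../name" climbs back out of the loop node (book, then goods, then store) to repeat the store name on every row. A sketch of the equivalent logic (illustrative Python only, not code the Studio generates):

```python
import json

# Store.json abridged to the nodes queried in this scenario
store_json = """
{"store": {
    "name": "Sunshine Department Store",
    "goods": {"book": [
        {"category": "Reference", "title": "Sayings of the Century",
         "author": "Nigel Rees", "price": 8.88},
        {"category": "Fiction", "title": "Sword of Honour",
         "author": "Evelyn Waugh", "price": 12.66}
    ]}
}}
"""
doc = json.loads(store_json)

# Loop query "/store/goods/book" iterates the book array; "../../name"
# reaches back to store/name. In plain dict terms, the parent value is
# simply captured outside the loop and repeated on each row.
store = doc["store"]
rows = [
    {"store_name": store["name"],         # "../../name"
     "book_title": b["title"],            # "title"
     "book_category": b["category"],      # "category"
     "book_author": b["author"],          # "author"
     "book_price": float(b["price"])}     # "price"
    for b in store["goods"]["book"]
]
print(rows)
```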

Scenario 4: Extracting JSON data from a URL

In this scenario, tFileInputJSON retrieves the friends node from the JSON file facebook.json, a file on the Web that contains the data of a Facebook user, and tExtractJSONFields then extracts the data from the friends node for flat data output.

The JSON file facebook.json is deployed on a Tomcat server, in the folder <tomcat path>/webapps/docs, and the content of the file is as follows:

{"user": {
    "id": "9999912398",
    "name": "Kelly Clarkson",
    "friends": [
        {
            "name": "Tom Cruise",
            "id": "55555555555555",
            "likes": {"data": [
                {
                    "category": "Movie",
                    "name": "The Shawshank Redemption",
                    "id": "103636093053996",
                    "created_time": "2012-11-20T15:52:07+0000"
                },
                {
                    "category": "Community",
                    "name": "Positiveretribution",
                    "id": "471389562899413",
                    "created_time": "2012-12-16T21:13:26+0000"
                }
            ]}
        },
        {
            "name": "Tom Hanks",
            "id": "88888888888888",
            "likes": {"data": [
                {
                    "category": "Journalist",
                    "name": "Janelle Wang",
                    "id": "136009823148851",
                    "created_time": "2013-01-01T08:22:17+0000"
                },
                {
                    "category": "Tv show",
                    "name": "Now With Alex Wagner",
                    "id": "305948749433410",
                    "created_time": "2012-11-20T06:14:10+0000"
                }
            ]}
        }
    ]
}}

Adding and linking the components

  1. Create a new Job and add a tFileInputJSON component, a tExtractJSONFields component, and two tLogRow components by typing their names in the design workspace or dropping them from the Palette.

  2. Link the tFileInputJSON component to the first tLogRow component using a Row > Main connection.

  3. Link the first tLogRow component to the tExtractJSONFields component using a Row > Main connection.

  4. Link the tExtractJSONFields component to the second tLogRow component using a Row > Main connection.

Configuring the components

  1. Double-click the tFileInputJSON component to open its Basic settings view.

  2. Select JsonPath without loop from the Read By drop-down list. Then select the Use Url check box and, in the URL field that is displayed, enter the URL of the file facebook.json from which the data will be retrieved. In this example, it is http://localhost:8080/docs/facebook.json.

  3. Click the [...] button next to Edit schema and in the [Schema] dialog box define the schema by adding one column friends of String type.

    Click OK to close the dialog box and accept the propagation prompted by the pop-up dialog box.

  4. In the Mapping table, enter the JSONPath query "$.user.friends[*]" next to the friends column to retrieve the entire friends node from the source file.

  5. Double-click tExtractJSONFields to open its Basic settings view.

  6. Select Xpath from the Read By drop-down list.

  7. In the Loop XPath query field, enter the XPath expression between double quotation marks to specify the node on which the loop is based. In this example, it is "/likes/data".

  8. Click the [...] button next to Edit schema and in the [Schema] dialog box define the schema by adding five columns of String type, id, name, like_id, like_name, and like_category, which will hold the data of relevant nodes under the JSON field friends.

    Click OK to close the dialog box and accept the propagation prompted by the pop-up dialog box.

  9. In the XPath query fields of the Mapping table, type in the XPath query expressions between double quotation marks to specify the JSON nodes that hold the desired data. In this example,

    • "../../id" (querying the "/friends/id" node) for the column id,

    • "../../name" (querying the "/friends/name" node) for the column name,

    • "id" for the column like_id,

    • "name" for the column like_name, and

    • "category" for the column like_category.

  10. Double-click the second tLogRow component to open its Basic settings view.

    In the Mode area, select Table (print values in cells of a table) for better readability of the result.

Executing the Job

  1. Press Ctrl + S to save the Job.

  2. Press F6 to execute the Job.

    The friends data in the JSON file specified by the URL is retrieved first, and the data from the friends node is then extracted and displayed in a flat table.
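The two-stage flow can be sketched as follows (illustrative Python only, not code the Studio generates): stage 1 mirrors tFileInputJSON emitting one JSON document per friend via "$.user.friends[*]", and stage 2 mirrors tExtractJSONFields looping on "/likes/data" while climbing back with "../../" to the enclosing friend:

```python
import json

# facebook.json abridged to the nodes queried in this scenario
facebook_json = """
{"user": {"id": "9999912398", "name": "Kelly Clarkson", "friends": [
    {"name": "Tom Cruise", "id": "55555555555555",
     "likes": {"data": [
         {"category": "Movie", "name": "The Shawshank Redemption",
          "id": "103636093053996"},
         {"category": "Community", "name": "Positiveretribution",
          "id": "471389562899413"}
     ]}},
    {"name": "Tom Hanks", "id": "88888888888888",
     "likes": {"data": [
         {"category": "Journalist", "name": "Janelle Wang",
          "id": "136009823148851"},
         {"category": "Tv show", "name": "Now With Alex Wagner",
          "id": "305948749433410"}
     ]}}
]}}
"""
doc = json.loads(facebook_json)

# Stage 1 (tFileInputJSON): "$.user.friends[*]" emits each friend as a
# JSON string in the single schema column friends.
friends = [json.dumps(f) for f in doc["user"]["friends"]]

# Stage 2 (tExtractJSONFields): loop on "/likes/data"; "../../id" and
# "../../name" climb back to the enclosing friend object.
rows = []
for friend_str in friends:
    friend = json.loads(friend_str)
    for like in friend["likes"]["data"]:
        rows.append({
            "id": friend["id"],        # "../../id"
            "name": friend["name"],    # "../../name"
            "like_id": like["id"],
            "like_name": like["name"],
            "like_category": like["category"],
        })
print(rows)
```

The result is one flat row per like, with the owning friend's id and name repeated on each row.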

tFileInputJSON in Talend Map/Reduce Jobs

Warning

This component will be available in the Palette of Talend Studio on the condition that you have subscribed to one of the Talend solutions with Big Data.

In a Talend Map/Reduce Job, tFileInputJSON, as well as the whole Map/Reduce Job using it, generates native Map/Reduce code. This section presents the specific properties of tFileInputJSON when it is used in that situation. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

MapReduce / Input

 

Basic settings

Property type

Either Built-In or Repository.

  

Built-In: No property data stored centrally.

  

Repository: Select the repository file where the properties are stored.

The fields that follow are pre-filled in using the fetched data.

For further information about the File Json node, see the section about setting up a JSON file schema in the Talend Studio User Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Read by

Select a way of extracting the JSON data in the file.

  • Xpath: Extracts the JSON data based on the XPath query.

  • JsonPath: Extracts the JSON data based on the JSONPath query. Reading the data by JSONPath is recommended for better performance.

 

Folder/File

Enter the path to the file or folder on HDFS from which the data will be extracted.

If the path you entered points to a folder, all files stored in that folder will be read.

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputJSON automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure that the connection to the Hadoop distribution to be used is properly configured in the Hadoop configuration tab in the Run view.
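For illustration only, the extension-based decompression dispatch amounts to something like the sketch below (using Python standard-library codecs purely to demonstrate the idea; LZO is omitted because the standard library has no codec for it, and the component's actual implementation is in Java):

```python
import bz2
import gzip
import zlib

def decompress(filename, data):
    """Pick a decompressor by file extension, as the component does
    at runtime; unknown extensions are treated as uncompressed."""
    if filename.endswith(".gz"):
        return gzip.decompress(data)       # gzip: *.gz
    if filename.endswith(".bz2"):
        return bz2.decompress(data)        # bzip2: *.bz2
    if filename.endswith(".deflate"):
        return zlib.decompress(data)       # DEFLATE: *.deflate
    return data                            # plain file

payload = b'{"store": {"name": "Sunshine Department Store"}}'
print(decompress("store.json.gz", gzip.compress(payload)))
```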

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 

Loop Jsonpath query

Enter the path pointing to the node within the JSON field, on which the loop is based.

Note that if you have selected Xpath from the Read by drop-down list, the Loop Xpath query field is displayed instead.

 

Mapping

Complete this table to map the columns defined in the schema to the corresponding JSON nodes.

  • Column: The Column cells are automatically filled with the defined schema column names.

  • Json query/JSONPath query: Specify the JSONPath node that holds the desired data. For more information about JSONPath expressions, see http://goessner.net/articles/JsonPath/.

    This column is available only when JsonPath is selected from the Read By list.

  • XPath query: Specify the XPath node that holds the desired data.

    This column is available only when Xpath is selected from the Read By list.

  • Get Nodes: Select this check box to extract the JSON data of all the nodes or select the check box next to a specific node to extract the data of that node.

    This column is available only when Xpath is selected from the Read By list.

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

 

Validate date

Select this check box to check the date format strictly against the input schema.

 

Encoding

Select the encoding from the list or select Custom and define it manually.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.

Usage in Map/Reduce Jobs

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileInputJSON as well as the MapReduce family appears in the Palette of the Studio.

For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is, traditional Talend data integration Jobs rather than Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Prerequisites

The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is installed, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client, allowing subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileInputJSON properties in Spark Batch Jobs

Warning

The streaming version of this component is available in the Palette of the studio on the condition that you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Component Family

File / Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.

 

Property type

Either Built-In or Repository.

  

Built-In: No property data stored centrally.

  

Repository: Select the repository file where the properties are stored.

The fields that follow are pre-filled in using the fetched data.

For further information about the File Json node, see the section about setting up a JSON file schema in the Talend Studio User Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Read by

Select a way of extracting the JSON data in the file.

  • Xpath: Extracts the JSON data based on the XPath query.

  • JsonPath: Extracts the JSON data based on the JSONPath query. Reading the data by JSONPath is recommended for better performance.

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you entered points to a folder, all files stored in that folder will be read.

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputJSON automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure you have properly configured the connection in the configuration component you have selected from the configuration component list.

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

 

Loop Jsonpath query

Enter the path pointing to the node within the JSON field, on which the loop is based.

Note that if you have selected Xpath from the Read by drop-down list, the Loop Xpath query field is displayed instead.

 

Mapping

Complete this table to map the columns defined in the schema to the corresponding JSON nodes.

  • Column: The Column cells are automatically filled with the defined schema column names.

  • Json query/JSONPath query: Specify the JSONPath node that holds the desired data. For more information about JSONPath expressions, see http://goessner.net/articles/JsonPath/.

    This column is available only when JsonPath is selected from the Read By list.

  • XPath query: Specify the XPath node that holds the desired data.

    This column is available only when Xpath is selected from the Read By list.

  • Get Nodes: Select this check box to extract the JSON data of all the nodes or select the check box next to a specific node to extract the data of that node.

    This column is available only when Xpath is selected from the Read By list.

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

 

Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Usage in Spark Batch Jobs

In a Talend Spark Batch Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Batch components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, because the Job needs its dependent jar files at execution time, exactly one file-system-related component from the Storage family is required in the same Job, so that Spark can use it to connect to the file system to which those jar files are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component yet.

tFileInputJSON properties in Spark Streaming Jobs

Warning

The streaming version of this component is available in the Palette of the Studio only if you have subscribed to Talend Real-Time Big Data Platform or Talend Data Fabric.

Component Family

File / Input

 

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS or S3.

If you leave this check box clear, the target file system is the local system.

Note that the configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to read data from a given HDFS system.

 

Property type

Either Built-In or Repository.

  

Built-In: No property data stored centrally.

  

Repository: Select the repository file where the properties are stored.

The fields that follow are automatically filled in with the retrieved data.

For further information about the File Json node, see the section about setting up a JSON file schema in Talend Studio User Guide.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the [Repository Content] window.

  

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

  

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

 

Read by

Select a way of extracting the JSON data in the file.

  • Xpath: Extracts the JSON data based on the XPath query.

  • JsonPath: Extracts the JSON data based on the JSONPath query. Note that reading the data by JSONPath is recommended for better performance.
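The two modes express the same extraction in different query languages. The stdlib Python sketch below shows a hypothetical record queried both ways: JsonPath mode navigates the JSON directly, while Xpath mode conceptually queries an XML view of the same data (the internal JSON-to-XML conversion shown here is an assumption for illustration; the queries and data are not from the Talend API).

```python
import json
import xml.etree.ElementTree as ET

# The same record as JSON and as the XML tree that Xpath mode
# conceptually queries (illustrative conversion).
json_doc = '{"store": {"book": [{"title": "Dune"}, {"title": "Snow Crash"}]}}'
xml_doc = ("<root><store><book><title>Dune</title></book>"
           "<book><title>Snow Crash</title></book></store></root>")

# JsonPath mode: loop query $.store.book[*], column query "title"
titles_jsonpath = [b["title"] for b in json.loads(json_doc)["store"]["book"]]

# Xpath mode: loop query /root/store/book, column query "title"
root = ET.fromstring(xml_doc)
titles_xpath = [b.findtext("title") for b in root.findall("./store/book")]

print(titles_jsonpath == titles_xpath)  # both modes yield the same rows
```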

 

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you entered points to a folder, all files stored in that folder will be read.

If the file to be read is a compressed one, enter the file name with its extension; then tFileInputJSON automatically decompresses it at runtime. The supported compression formats and their corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need to ensure the connection is properly configured in the component you have selected from the Define a storage configuration component list.
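The automatic decompression can be pictured with a short stdlib Python sketch: a *.gz file is transparently gunzipped before the JSON inside it is parsed, which is what the component does for you at runtime (the file name and payload below are illustrative).

```python
import gzip
import json
import os
import tempfile

# Illustrative payload written as a gzip-compressed JSON file.
payload = {"id": 1, "name": "sample"}

path = os.path.join(tempfile.mkdtemp(), "data.json.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    json.dump(payload, f)

# Reading back: the .gz extension tells the reader to decompress first,
# then parse the JSON, mirroring tFileInputJSON's runtime behavior.
with gzip.open(path, "rt", encoding="utf-8") as f:
    restored = json.load(f)

print(restored)  # {'id': 1, 'name': 'sample'}
```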

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

 

Loop Jsonpath query

Enter the path to the JSON node on which the loop is based.

Note that if you have selected Xpath from the Read by drop-down list, the Loop Xpath query field is displayed instead.

 

Mapping

Complete this table to map the columns defined in the schema to the corresponding JSON nodes.

  • Column: The Column cells are automatically filled with the defined schema column names.

  • Json query/JSONPath query: Specify the JSONPath node that holds the desired data. For more information about JSONPath expressions, see http://goessner.net/articles/JsonPath/.

    This column is available only when JsonPath is selected from the Read By list.

  • XPath query: Specify the XPath node that holds the desired data.

    This column is available only when Xpath is selected from the Read By list.

  • Get Nodes: Select this check box to extract the JSON data of all the nodes or select the check box next to a specific node to extract the data of that node.

    This column is available only when Xpath is selected from the Read By list.
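The effect of Get Nodes can be sketched with stdlib Python against the XML view that Xpath mode conceptually queries: with Get Nodes selected, a column receives the whole matched node rather than just its text value. The data, the loop query /root/store/book, and the internal JSON-to-XML conversion are all illustrative assumptions, not Talend internals.

```python
import xml.etree.ElementTree as ET

# Illustrative XML view of a JSON document, as Xpath mode would query it.
doc = ("<root><store><book><title>Dune</title><price>8.99</price></book>"
       "<book><title>Snow Crash</title><price>9.99</price></book></store></root>")

root = ET.fromstring(doc)
rows = []
for book in root.findall("./store/book"):   # Loop Xpath query: /root/store/book
    rows.append({
        # Plain column query "title": only the text value.
        "title": book.findtext("title"),
        # Get Nodes selected: the entire matched <book> node as a string.
        "node": ET.tostring(book, encoding="unicode"),
    })

print(rows[0]["title"])  # Dune
```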

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

 

Encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Usage in Spark Streaming Jobs

In a Talend Spark Streaming Job, it is used as a start component and requires an output link. The other components used along with it must be Spark Streaming components, too. They generate native Spark code that can be executed directly in a Spark cluster.

This component is only used to provide the lookup flow (the right side of a join operation) to the main flow of a tMap component. In this situation, the lookup model used by this tMap must be Load once.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Log4j

If you are using a subscription-based version of the Studio, the activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, because the Job needs its dependent jar files at execution time, exactly one file-system-related component from the Storage family is required in the same Job, so that Spark can use it to connect to the file system to which those jar files are transferred.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component yet.