tHMapFile - 6.3

Talend Components Reference Guide


Warning

This component is available in the Palette of the Studio only if you have subscribed to one of the Talend solutions with Big Data.

Function

tHMapFile transforms data from a single source in a Spark environment.

Purpose

tHMapFile runs a Talend Data Mapper map where input and output structures may differ, as a Spark batch execution.

tHMapFile properties in Spark Batch Jobs

Component family

Processing

 

Basic settings

Storage

To connect to an HDFS installation, select the Define a storage configuration component check box and then select the name of the component to use from those available in the drop-down list.

This option requires you to have previously configured the connection to the HDFS installation to be used, as described in the documentation for the tHDFSConfiguration component.

If you leave the Define a storage configuration component check box unselected, you can only convert files locally.

 

Configure Component

To configure the component, click the [...] button and, in the [Component Configuration] window, perform the following actions.

  1. Click the Select button next to the Record Map field and then, in the [Select a Map] dialog box that opens, select the map you want to use and then click OK.

    This map must have been previously created in Talend Data Mapper.

    Note that the input and output representations are those defined in the map, and cannot be changed in the component.

  2. Click Next.

  3. Tell the component where each new record begins. To do this, you need to fully understand the structure of your data.

    Exactly how you do this varies depending on the input representation being used, and you will be presented with one of the following options.

    1. Select an appropriate record delimiter for your data; a short illustration follows this procedure. Note that you must specify this value without quotes.

      • Separator lets you specify a separator indicator, such as \n, to identify a new line.

        Supported indicators are \n for a Unix-type new line, \r\n for Windows and \r for Mac, and \t for tab characters.

      • Start with lets you specify the initial characters that indicate a new record, such as <root.

        Start with also supports the new-line and tab indicators: \n for a Unix-type new line, \r\n for Windows, \r for Mac, and \t for tab characters.

    2. If your input representation is COBOL, define the signature for the input record structure:

      • Min Size corresponds to the size in bytes of the smallest record. If you set this value too low, you may encounter performance issues, since the component will perform more checks than necessary when looking for a new record.

      • Max Size corresponds to the size in bytes of the largest record, and is used to determine how much memory is allocated to read the input.

      • Footer Size corresponds to the size in bytes of the footer, if any. At runtime, the footer will be ignored rather than being mistakenly included in the last record. Leave this field empty if there is no footer.

      • Click the Configure button to open the [Edit Signature] window, select the fields that define the signature of your record input structure (that is, to identify where a new record begins), update the Operation and Value columns as appropriate, and then click OK to return to the [Component Configuration] window.

  4. To test the signature with a sample file, click the [...] button, browse to the file you want to use as a sample, and then click Open.

    Testing the signature lets you check that the total number of records and their minimum and maximum length correspond to what you expect based on your knowledge of the data. This step assumes you have a local subset of your data to use as a sample.

  5. Click Run to test your sample.

  6. Click Finish.
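As an illustration of the Separator option (the records below are hypothetical sample data, not taken from any Talend scenario files), an input in which every record sits on its own line can be split by setting Separator to \n:

  {"id": 1, "label": "first record"}
  {"id": 2, "label": "second record"}

For an XML input in which every record begins with the same opening tag, such as <root, you would instead select Start with and enter <root (without quotes).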

 

Input

Click the [...] button to define the path to where the input file is stored.

 

Output

Click the [...] button to define the path to where the output files will be stored.

 

Action

From the drop-down list, select:

  • Create if you want the mapping process to create a new file.

  • Overwrite if you want the mapping process to overwrite an existing file.

 

Open Map Editor

Click the [...] button to open the map for editing in the Map Editor of Talend Data Mapper.

For more information, see Talend Data Mapper User Guide.

Usage

This component is used with a tHDFSConfiguration component, which defines the connection to the HDFS storage, or as a standalone component for mapping local files only.

Scenario: Transforming data in a Spark environment

The following scenario creates a two-component Job that transforms data in a Spark environment using a map that was previously created in Talend Data Mapper.

Downloading the input files

  1. Download the input files for this scenario here.

    The thmapfile_transform_scenario.zip file contains two files:

    • gdelt.json is a JSON file built using data from the GDELT project: http://gdeltproject.org

    • gdelt-onerec.json is a subset of gdelt.json containing just one record, and is used as a sample document for creating the structure in Talend Data Mapper.

  2. Save the thmapfile_transform_scenario.zip file on your local machine and unpack the .zip file.

Creating the input and output structures

  1. In the Integration perspective, in the Repository tree view, expand Metadata > Hierarchical Mapper, right-click Structures, and then click New > Structure.

  2. In the New Structure dialog box that opens, select Import a structure definition, and then click Next.

  3. Select JSON Sample Document, and then click Next.

  4. Select Local file, browse to the location on your local file system where you saved the source files, import gdelt-onerec.json as your sample document, and then click Next.

  5. Give your new structure a name, gdelt-onerec in this example, click Next, and then click Finish.

Creating the map

  1. In the Repository tree view, expand Metadata > Hierarchical Mapper, right-click Maps, and then click New > Map.

  2. In the Select Type of New Map dialog box that opens, select Standard Map and then click Next.

  3. Give your new map a name, json2xml in this example, and then click Finish.

  4. Drag the gdelt-onerec structure you created earlier into both the Input and Output sides of the map.

  5. On the Output side of the map, change the representation used from JSON to XML by double-clicking Output (JSON) and selecting the XML output representation.

  6. Drag the Root element from the Input side of the map to the Root element on the Output side. This maps each element on the Input side to its corresponding element on the Output side, making a very simple map intended just for testing purposes; a simplified illustration follows this procedure.

  7. Press Ctrl+S to save your map.
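To give a rough idea of what this map does (the record below is a deliberately simplified, hypothetical example rather than an actual GDELT record), a JSON input record along the lines of

  {"EventId": "123", "Actor": "SAMPLE"}

would be written to the output as the corresponding XML element structure, for example:

  <Root>
    <EventId>123</EventId>
    <Actor>SAMPLE</Actor>
  </Root>

The exact element names and nesting depend on the structure generated from your sample document.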

Adding the components

  1. In the Integration perspective, create a new Job and call it thmapfile_transform.

  2. Click any point in the design workspace, start typing tHDFSConfiguration, and then click the name of the component when it appears in the proposed list to select it.

    Note that for testing purposes, you can also perform this scenario locally. In that case, do not add the tHDFSConfiguration component and skip the Configuring the connection to the file system to be used by Spark section below.

  3. Do the same to add a tHMapFile component, but do not link the two components.

Configuring the connection to the file system to be used by Spark

  1. Double-click tHDFSConfiguration to open its Component view.

  2. In the Version area, select the Hadoop distribution you need to connect to and its version.

  3. In the NameNode URI field, enter the location of the machine hosting the NameNode service of the cluster. An example of the URI format follows this procedure.

  4. In the Username field, enter the authentication information used to connect to the HDFS system to be used.
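For example, a NameNode URI typically takes a form such as hdfs://<namenode_host>:8020, where the host name and port are specific to your cluster (8020 is a common default for the NameNode service, but check your distribution's configuration).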

Defining the properties of tHMapFile

  1. In the Job, select the tHMapFile component to define its properties.

  2. Select the Define a storage configuration component check box and then select the name of the component to use from those available in the drop-down list, tHDFSConfiguration_1 in this example.

    Note that if you leave the Define a storage configuration component check box unselected, you can only transform files locally.

  3. Click the [...] button and, in the [Component Configuration] window, click the Select button next to the Record Map field.

  4. In the [Select a Map] dialog box that opens, select the map you want to use and then click OK. In this example, use the json2xml map you just created.

  5. Click Next.

  6. Select an appropriate record delimiter for your data that tells the component where each new record begins.

    In this example, each record is on a new line, so select Separator and specify the newline character, \n in this example.

  7. Click Finish.

  8. Click the [...] button next to the Input field to define the path to the input file, /talend/input/gdelt.json in this example. If you are using HDFS, see the upload sketch after this procedure.

  9. Click the [...] button next to the Output field to define the path to where the output file is to be stored, /talend/output in this example.

  10. Leave the other settings unchanged.
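If you are running the Job against HDFS rather than locally, the input file must already be present at the path you entered. As a sketch, assuming the standard HDFS command-line shell is available and using this scenario's example paths, you could upload it as follows:

  hdfs dfs -mkdir -p /talend/input
  hdfs dfs -put gdelt.json /talend/input/gdelt.json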

Saving and executing the Job

  1. Press Ctrl+S to save your Job.

  2. In the Run tab, click Run to execute the Job.

  3. Browse to the location on your file system where the output files are stored to check that the transformation was performed successfully.
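If the output was written to HDFS, you can also inspect it from the command line, for example (a sketch using standard HDFS shell commands and this scenario's example path; the actual file names depend on the execution):

  hdfs dfs -ls /talend/output
  hdfs dfs -cat /talend/output/<name_of_an_output_file>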