tHMapInput properties for Apache Spark Batch - Cloud - 8.0

Data mapping

Version
Cloud
8.0
Language
English
Product
Talend Big Data Platform
Talend Data Fabric
Talend Data Management Platform
Talend Data Services Platform
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > Processing components (Integration) > Data mapping
Data Quality and Preparation > Third-party systems > Processing components (Integration) > Data mapping
Design and Development > Third-party systems > Processing components (Integration) > Data mapping
Last publication date
2024-02-29

These properties are used to configure tHMapInput running in the Spark Batch Job framework.

The Spark Batch tHMapInput component belongs to the Processing family.

This component is available in Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Storage

To connect to an HDFS installation, select the Define a storage configuration component check box and then select the name of the component to use from those available in the drop-down list.

This option requires you to have previously configured the connection to the HDFS installation to be used, as described in the documentation for the tHDFSConfiguration component.

If you leave the Define a storage configuration component check box unselected, you can only convert files locally.

Configure Component

Before you configure this component, you must have already added a downstream component, linked it to the tHMapInput component, and retrieved the schema from the downstream component.

To configure the component, click the [...] button and, in the Component Configuration window, perform the following actions.
  1. Click the Select button next to the Record structure field and then, in the Select a Structure dialog box that opens, select the structure you want to use and then click OK.

    This structure must have been previously created in Talend Data Mapper.

  2. Select the Input Representation to use from the drop-down list.

    Supported input formats are Avro, COBOL, EDI, Flat, IDocs, JSON and XML.

  3. Click Next.

  4. Specify where each new record begins. To do this, you need to fully understand the structure of your data.

    Exactly how you do this depends on the input representation, and you are presented with one of the following options. A sketch after these configuration steps illustrates the underlying record-splitting idea.

    1. Select an appropriate record delimiter for your data. Note that you must specify this value without quotes.

      • Separator lets you specify a separator indicator, such as \n, to identify a new line.

        Supported indicators are \n for a Unix-type new line, \r\n for Windows and \r for Mac, and \t for tab characters.

      • Start/End with lets you specify the initial characters that indicate a new record, such as <root, or the characters that indicate where a record ends.

        Start with also supports the same indicators: \n for a Unix-type new line, \r\n for Windows and \r for Mac, and \t for tab characters.

        Select the Regular Expression check box if you wish to enter a regular expression to match the start of a record. When you select XML or JSON, this check box is selected by default and a pre-configured regular expression is provided.

      • Sample File: To test the signature with a sample file, click the [...] button, browse to the file you want to use as a sample, click Open, and then click Run to test your sample.

        Testing the signature lets you check that the total number of records and their minimum and maximum lengths correspond to what you expect based on your knowledge of the data. This step assumes you have a local subset of your data to use as a sample.

      • Click Finish.

    2. If your input representation is COBOL or Flat with positional and/or binary encoding properties, define the signature for the input record structure:
      • Input Record root corresponds to the root element in your input record.
      • Minimum Record Size corresponds to the size in bytes of the smallest record. If you set this value too low, you may encounter performance issues, since the component will perform more checks than necessary when looking for a new record.

      • Maximum Record Size corresponds to the size in bytes of the largest record, and is used to determine how much memory is allocated to read the input.

      • Sample from Workspace or Sample from File System: To test the signature with a sample file, click the [...] button, and then browse to the file you want to use as a sample.

        Testing the signature lets you check that the total number of records and their minimum and maximum lengths correspond to what you expect based on your knowledge of the data. This step assumes you have a local subset of your data to use as a sample.

      • Footer Size corresponds to the size in bytes of the footer, if any. At runtime, the footer will be ignored rather than being mistakenly included in the last record. Leave this field empty if there is no footer.

      • Click the Next button to open the Signature Parameters window, select the fields that define the signature of your record input structure (that is, to identify where a new record begins), update the Operation and Value columns as appropriate, and then click Next.

      • In the Record Signature Test window that opens, check that your records are correctly delineated by scrolling through them with the Back and Next buttons and performing a visual check, and then click Finish.

  5. Map the elements from the input structure to the output structure in the new map that opens, and then press Ctrl+S to save your map.

    For more information on creating maps, see Creating standard maps.
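
Conceptually, every option above tells the component how to scan the input for the pattern that marks where one record ends and the next begins. The following minimal Java sketch is illustrative only and is not the component's actual implementation; the class and method names are hypothetical. It splits input text on a Start with marker such as <root, and applies a size guard comparable to the Maximum Record Size field described for positional inputs.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: split text into records on a start marker,
// which is conceptually what the "Start with" option configures.
public class RecordSplitSketch {

    static List<String> splitOnStartMarker(String input, String startMarker, int maxRecordSize) {
        List<String> records = new ArrayList<>();
        int recordStart = input.indexOf(startMarker);
        while (recordStart >= 0) {
            // The next occurrence of the marker ends the current record.
            int next = input.indexOf(startMarker, recordStart + startMarker.length());
            int end = (next >= 0) ? next : input.length();
            // Comparable to Maximum Record Size: bound how large a record may be.
            if (end - recordStart > maxRecordSize) {
                throw new IllegalStateException("Record exceeds maximum size at offset " + recordStart);
            }
            records.add(input.substring(recordStart, end));
            recordStart = next;
        }
        return records;
    }

    public static void main(String[] args) {
        String xml = "<root id=\"1\"/><root id=\"2\"/><root id=\"3\"/>";
        // Prints three records, each starting with "<root".
        splitOnStartMarker(xml, "<root", 1024).forEach(System.out::println);
    }
}

A Separator such as \n works the same way, with the new-line indicator acting as the marker between records.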

Input

Click the [...] button to define the path to where the input file is stored.

Open Map Editor

Click the [...] button to open the Structure Generate/Select wizard.

You can first select the type of map to create:
  • Standard Map: map that performs mappings using functions based on XQuery.
  • DSQL Map: map that performs mappings using the Data Shaping Query Language.
You can select the Don't ask me again check box to save this preference. For more information about these map types, see Working with maps.
Note: This option is available only if you have installed the R2023-10 Studio monthly update or a later one delivered by Talend. For more information, check with your administrator.

Then you can either have the hierarchical mapper structure generated automatically based on the schema, or select an existing hierarchical mapper structure. You must do this for both the input and output sides of your map. The following lists the options for the output structure:

  • Generate hierarchical mapper structure based on the schema option: When you connect multiple output connections to the component, the page displays a confirmation message informing you that the mapper structures are generated based on the output connections.
  • Select an existing hierarchical mapper structure option: You can connect multiple payload-based output connections to the component. If there is a single payload-type connection, you can select the Allow support for multiple output connections check box. The generated output map inherits from the existing payload structure.

If Talend Studio detects multiple available output connections, the window displays both output structure options without the Allow support for multiple output connections check box.

If neither input nor output connection exists, the Structure Selection page is displayed.

Synchronize map with schema connections

Select this check box if you want to automatically regenerate your map's input and output structures after one of the following changes:
  • Connection metadata change
  • Input or output connection added
  • Input or output connection removed
No changes are detected when a connection is activated or deactivated.
If this check box is selected, the map is automatically synchronized when you open it from the component after a change. If not, a dialog box appears asking whether you want to synchronize it.
Note: For structures with multiple connections, the map can only be synchronized if the structures have the same form as the ones generated by the component configuration wizard. For example, flattening maps with multiple outputs cannot be synchronized automatically.

Die on error

This check box is selected by default.

Clear the check box to skip any rows on error and complete the process for error-free rows.

If you clear the check box, you can do either of the following:
  • Connect the tHMapInput component to an output component, for example tAvroOutput, using a Row > Rejects connection. In the output component, make sure you add fixed metadata with the following columns:
    • inputRecord: contains the input record rejected during the transformation.
    • recordId: refers to the record identifier. For a text or binary input, the recordId specifies the start offset of the record in the input file. For an Avro input, the recordId specifies the timestamp when the input was processed.
    • errorMessage: contains the transformation status with details of the cause of the transformation error.
  • Retrieve the rejected records in a file. Either of these mechanisms triggers this feature: (1) a context variable (talend_transform_reject_file_path) or (2) a system variable set in the Advanced Job parameters (spark.hadoop.talend.transform.reject.file.path). See the configuration sketch at the end of this section.

    When you set the file path on the Hadoop Distributed File System (HDFS), no further configuration is needed. When you set the file path on Amazon S3 or any other Hadoop-compatible file system, add the associated Spark advanced configuration parameters.

    In case of errors at runtime, tHMapInput checks whether one of these mechanisms exists and, if so, appends the rejected record to the designated file. The reject file content is the concatenation of the rejected records, without any additional metadata.

    If the file system you use does not support appending to a file, a separate file is created for each rejection. The file uses the provided file path as the prefix and adds a suffix composed of the record's offset in the input file and the size of the rejected record.

Note: Any errors while trying to store the reject are logged and the processing continues.
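
As a sketch of how the second mechanism fits together: Spark forwards spark.hadoop.* properties into the Hadoop configuration, which is how the reject file path reaches the runtime. In Talend Studio you would normally enter the property as a key/value pair in the advanced Spark properties of the Job; the programmatic Java equivalent below is illustrative only, and the paths and S3 credential properties shown are assumptions, not values from this documentation.

import org.apache.spark.SparkConf;

public class RejectPathConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("tHMapInputRejectsDemo")
                // Reject file path property named in this documentation; the HDFS
                // path itself is an example. On HDFS, nothing more is needed.
                .set("spark.hadoop.talend.transform.reject.file.path",
                        "hdfs:///user/talend/rejects/thmapinput")
                // If the path pointed to Amazon S3 instead, you would also add the
                // matching advanced parameters, for example the standard Hadoop S3A
                // credential properties (adjust to your file system and security setup):
                .set("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
                .set("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY");
        System.out.println(conf.toDebugString());
    }
}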

Usage

Usage rule

This component is used with a tHDFSConfiguration component, which defines the connection to the HDFS storage.

It is an input component and requires an output flow.

Usage with Talend Runtime

If you want to deploy a Job or Route containing a data mapping component with Talend Runtime, you first need to install the Talend Data Mapper feature. For more information, see Using Talend Data Mapper with Talend Runtime.