Normalizes the incoming data into a separate XML or JSON data flow, separating or standardizing the rule-compliant data from the non-compliant data.
tStandardizeRow tokenizes the data flow it receives from the preceding component and applies user-defined parser rules to analyze the data. Based on this analysis, the component normalizes and writes the analyzed data to a separate data flow, tagging it with the user-defined rule names. It does not make any changes to your raw data.
The standardization option adds a supplementary column to the output flow, in which the normalized data is then standardized.
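The tagging behavior described above can be sketched in plain Java. This is an illustrative simplification, not the component's implementation: the rule name `Weight`, the regex pattern, and the `normalize` method are all hypothetical stand-ins for tStandardizeRow's richer rule types, showing only how rule-compliant tokens end up tagged with the rule name in a separate XML flow while the raw input is left untouched.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RuleTagDemo {
    // Hypothetical user-defined rule: a rule name plus a pattern.
    // Real tStandardizeRow rules (Enumeration, Format, Combination, ...)
    // are richer; the regex here only illustrates the idea.
    static final String RULE_NAME = "Weight";
    static final Pattern RULE = Pattern.compile("\\d+(\\.\\d+)?\\s*(kg|lb)");

    // Write every rule-compliant token, tagged with the rule name,
    // into a separate XML-style output; the raw input is not modified.
    static String normalize(String raw) {
        Matcher m = RULE.matcher(raw);
        StringBuilder out = new StringBuilder("<record>");
        while (m.find()) {
            out.append('<').append(RULE_NAME).append('>')
               .append(m.group())
               .append("</").append(RULE_NAME).append('>');
        }
        out.append("</record>");
        return out.toString();
    }

    public static void main(String[] args) {
        String raw = "parcel 2.5 kg fragile";
        // Non-compliant text ("parcel", "fragile") is simply not tagged.
        System.out.println(normalize(raw));
        // → <record><Weight>2.5 kg</Weight></record>
    }
}
```

Note how the non-compliant tokens are dropped from the tagged flow rather than altered: the raw record is still available unchanged upstream, which mirrors the component's promise of leaving raw data intact.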
The Java library ANTLR is used to parse and tokenize the incoming data. For further information about ANTLR, see the ANTLR website.
In local mode, Apache Spark 2.4.0 and later versions are supported.
The readme file of the migration tool for Lucene indexes is available:
- With the installer: /addons/scripts/Lucene_Migration_Tool/README.md.
- With no installer: in the license email, click the link Migration tool for Lucene Indexes from version 4 to version 8.
This component is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more information, see Installing features using the Feature Manager.
For more technologies supported by Talend, see Talend components.
Depending on the Talend product you are using, this component can be used in one, some, or all of the following Job frameworks:
Standard: see tStandardizeRow Standard properties.
The component in this framework is available in Talend Data Management Platform, Talend Big Data Platform, Talend Real Time Big Data Platform, Talend Data Services Platform, and in Talend Data Fabric.
Spark Batch: see tStandardizeRow properties for Apache Spark Batch.
The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.
Spark Streaming: see tStandardizeRow properties for Apache Spark Streaming.
The component in this framework is available in Talend Real Time Big Data Platform and in Talend Data Fabric.