Tokenizing Japanese text - 7.1

Text standardization

author
Talend Documentation Team
EnrichVersion
7.1
EnrichProdName
Talend Big Data Platform
Talend Data Fabric
Talend Data Management Platform
Talend Data Services Platform
Talend MDM Platform
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Data Quality components > Standardization components > Text standardization components
Data Quality and Preparation > Third-party systems > Data Quality components > Standardization components > Text standardization components
Design and Development > Third-party systems > Data Quality components > Standardization components > Text standardization components
EnrichPlatform
Talend Studio

This scenario applies only to Talend Data Management Platform, Talend Big Data Platform, Talend Real-Time Big Data Platform, Talend Data Services Platform, Talend MDM Platform and Talend Data Fabric.

For more technologies supported by Talend, see Talend components.

Using the tJapaneseTokenize component, you can split Japanese text into tokens.
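Because Japanese is written without spaces between words, tokenization relies on a dictionary rather than on whitespace. The component's internals are not detailed here; as a purely conceptual sketch, the toy longest-match tokenizer below (with a hypothetical six-entry dictionary) illustrates the general idea of dictionary-based word splitting:

```python
# Conceptual illustration only: a toy longest-match dictionary tokenizer.
# The dictionary entries below are hypothetical; a real Japanese tokenizer
# uses a full morphological dictionary and statistical disambiguation.
DICTIONARY = {"私", "は", "東京", "に", "住んで", "います"}

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible substring first, shrinking until a
        # dictionary entry matches.
        for j in range(len(text), i, -1):
            if text[i:j] in DICTIONARY:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

# "I live in Tokyo" with no spaces between words:
print(tokenize("私は東京に住んでいます"))
# → ['私', 'は', '東京', 'に', '住んで', 'います']
```

In the Job described below, tJapaneseTokenize performs this kind of splitting on each row of the input flow, so no such code needs to be written by hand.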

To replicate the example described below, retrieve the tJapaneseTokenize_standard_scenario.zip file from the Downloads tab in the left panel of this help page.

The tJapaneseTokenize_standard_scenario.zip file contains:
  • the plain text file inputJapaneseText.txt, which holds the Japanese text, its transcription and its English translation; and
  • the tJapaneseTokenizeJob.zip file, which holds the Job.