tJapaneseTokenize - Cloud - 8.0

Text standardization

Version: Cloud, 8.0
Language: English
Product:
Talend Big Data Platform
Talend Data Fabric
Talend Data Management Platform
Talend Data Services Platform
Talend MDM Platform
Talend Real-Time Big Data Platform
Module: Talend Studio
Last publication date: 2024-02-20

Splits Japanese text into tokens.

Tokenization is an important pre-processing step that prepares text data for subsequent analysis, transliteration, text mining, or natural language processing tasks.

Unlike English or French, Japanese does not use spaces to mark word boundaries, which makes splitting Japanese text into tokens more challenging.

Based on the IPADIC dictionary, tJapaneseTokenize infers where word boundaries occur and inserts a space between tokens.

The IPADIC dictionary was developed by the Information-Technology Promotion Agency of Japan (IPA). This dictionary is based on the IPA corpus and is the most widely used dictionary for Japanese tokenization.
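
The component's internal code is not shown here, but the following minimal Java sketch illustrates the same kind of IPADIC-based segmentation using Apache Lucene's Kuromoji JapaneseTokenizer, which also ships with the IPADIC dictionary. The sample input string, the NORMAL segmentation mode, and the availability of the lucene-analyzers-kuromoji module on the classpath are assumptions made for illustration only; they do not describe how tJapaneseTokenize is implemented.

import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ja.JapaneseTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class JapaneseTokenizeSketch {

    public static void main(String[] args) throws Exception {
        // Illustrative sketch only, not the tJapaneseTokenize implementation.
        // Sample input: Japanese text contains no spaces between words.
        String input = "今日は良い天気です";

        // Kuromoji tokenizer backed by its bundled IPADIC dictionary:
        // no user dictionary, punctuation discarded, NORMAL segmentation mode.
        try (Tokenizer tokenizer =
                new JapaneseTokenizer(null, true, JapaneseTokenizer.Mode.NORMAL)) {
            tokenizer.setReader(new StringReader(input));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);

            StringBuilder out = new StringBuilder();
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                if (out.length() > 0) {
                    out.append(' ');   // separate tokens with a space
                }
                out.append(term.toString());
            }
            tokenizer.end();

            // Prints something along the lines of: 今日 は 良い 天気 です
            System.out.println(out);
        }
    }
}

In a Talend Job, tJapaneseTokenize applies this kind of splitting to the selected input column; the sketch above only demonstrates the underlying dictionary-based segmentation idea.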

In local mode, Apache Spark 2.4.0 and later versions are supported.

This component is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more information, see Installing features using the Feature Manager.

For more technologies supported by Talend, see Talend components.