Using the tDataprepRun component to apply a preparation to a data sample in an Apache Spark Streaming Job - 6.4

Data Preparation

author
Talend Documentation Team
EnrichVersion
6.4
EnrichProdName
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Data Preparation components
Data Quality and Preparation > Third-party systems > Data Preparation components
Design and Development > Third-party systems > Data Preparation components
EnrichPlatform
Talend Data Preparation
Talend Studio

This scenario applies only to Talend Real-Time Big Data Platform and Talend Data Fabric.

For more technologies supported by Talend, see Talend components.

The tDataprepRun component allows you to reuse an existing preparation made in Talend Data Preparation, directly in a Big Data Job. In other words, you can operationalize the process of applying a preparation to any input data that shares the same schema.

The following scenario creates a simple Job that:

  • reads a small sample of customer data,
  • applies an existing preparation to this data,
  • shows the result of the execution in the console.

This scenario assumes that a preparation has been created beforehand, on a dataset with the same schema as the input data for the Job. Here, the existing preparation is called datapreprun_spark: this simple preparation puts the customer last names into upper case and applies a filter to isolate the customers from California, Texas and Florida.
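As a point of reference, the record-level logic of this preparation can be sketched in plain Java, the language Talend Jobs are generated in. This is only an illustrative sketch of what the preparation does to each record, not the code that tDataprepRun executes; the class name and sample values are hypothetical:

import java.util.List;
import java.util.Set;

// Illustrative sketch of the datapreprun_spark preparation's record-level
// logic: put the last name into upper case, then keep only the customers
// from California, Texas and Florida. In the actual Job, this processing
// is performed by the tDataprepRun component on the Spark cluster.
public class PreparationLogicSketch {

    private static final Set<String> KEPT_STATES =
            Set.of("California", "Texas", "Florida");

    public static void main(String[] args) {
        // Records follow the firstname;lastname;state schema of the sample data.
        List<String> sample = List.of(
                "James;Butt;California",
                "Daniel;Fox;Connecticut",
                "William;Wells;Florida");

        sample.stream()
                .map(line -> line.split(";"))
                // Upper-case step: customer last names to upper case.
                .map(f -> new String[] {f[0], f[1].toUpperCase(), f[2]})
                // Filter step: isolate customers from the three target states.
                .filter(f -> KEPT_STATES.contains(f[2]))
                .forEach(f -> System.out.println(String.join(";", f)));
    }
}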

The sample data reads as follows:
James;Butt;California
Daniel;Fox;Connecticut
Donna;Coleman;Alabama
Thomas;Webb;Illinois
William;Wells;Florida
Ann;Bradley;California
Sean;Wagner;Florida
Elizabeth;Hall;Minnesota
Kenneth;Jacobs;Florida
Kathleen;Crawford;Texas
Antonio;Reynolds;California
Pamela;Bailey;Texas
Patricia;Knight;Texas
Todd;Lane;New Jersey
Dorothy;Patterson;Virginia
Note: The sample data is created for demonstration purposes only.
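Given the steps described above (upper-casing the last names, then filtering on the state column), a successful run of the Job should print only the customers from California, Texas and Florida to the console, along these lines (the exact row order may differ in a streaming execution):

James;BUTT;California
William;WELLS;Florida
Ann;BRADLEY;California
Sean;WAGNER;Florida
Kenneth;JACOBS;Florida
Kathleen;CRAWFORD;Texas
Antonio;REYNOLDS;California
Pamela;BAILEY;Texas
Patricia;KNIGHT;Texas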

Prerequisite: ensure that the Spark cluster has been properly installed and is running.