Before you begin
You have previously created a connection to the system storing your source data.
Here, an Amazon S3 connection.
You have previously added the dataset holding your source data.
Here, hierarchical data about actors including ID, name, country, etc.
You have also created the connection and the related dataset that will hold the processed data.
Here, a file stored on Amazon S3.
- Click ADD PIPELINE on the Pipelines page. Your new pipeline opens.
- Give the pipeline a meaningful name.
Example: Normalize Actor Records
- Click ADD SOURCE to open the panel allowing you to select your source data, here a list of actors stored in Amazon S3.
Select your dataset and click SELECT to add it to the pipeline.
Rename it if needed.
- Click and add a Normalize processor to the pipeline. The Configuration panel opens.
- Give a meaningful name to the processor.
Example: normalize actors structure
- In the Column to normalize field, type in Actors, as this column contains the hierarchical records you want to normalize.
- Enable the Is list option to flatten the data (from an array structure to a record structure) into a list, and the Discard trailing empty strings option to drop empty entries.
- Click SAVE to save your configuration.
- Click the ADD DESTINATION item on the pipeline to open the panel allowing you to select the dataset that will hold your normalized data. Rename it if needed.
- (Optional) Look at the preview of the Normalize processor to compare your data before and after the normalizing operation.
- On the top toolbar of Talend Cloud Pipeline Designer, select your run profile in the list (for more information, see Run profiles).
- Click the run icon to run your pipeline.
Your pipeline runs: the records are normalized and the output is sent to the target system you indicated.
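To make the effect of the Normalize processor concrete, here is a minimal Python sketch of the same operation: each input record holding a list under the Actors column is split into one flat record per list element, with trailing empty entries discarded. The function name, field names, and sample data are illustrative assumptions, not part of the product.

```python
def normalize(records, column, discard_trailing_empty=True):
    """Conceptual sketch (not the Talend engine): flatten each
    record's list-valued `column` into one record per element."""
    out = []
    for rec in records:
        values = list(rec.get(column, []))
        if discard_trailing_empty:
            # Mirrors the "Discard trailing empty strings" option:
            # drop empty entries at the end of the list.
            while values and values[-1] in ("", None, {}):
                values.pop()
        for value in values:
            # Copy the other fields and replace the list with one element.
            flat = {k: v for k, v in rec.items() if k != column}
            flat[column] = value
            out.append(flat)
    return out

# Hypothetical source record with a hierarchical Actors column.
source = [
    {"movie": "Heat", "Actors": [
        {"id": 1, "name": "Al Pacino", "country": "US"},
        {"id": 2, "name": "Robert De Niro", "country": "US"},
        "",  # trailing empty entry, discarded by the option above
    ]},
]

for row in normalize(source, "Actors"):
    print(row)
```

Running this prints two flat records, one per actor, each still carrying the parent record's other fields.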