Before you begin
You have previously created a connection to the system storing your source data.
You have previously added the dataset holding your source data.
Here, a hierarchical list of customer data including ID and product information such as book title and price, which you can find attached to this document (download the aggregate-customers.json file from the Downloads tab in the left panel of this page).
You have also created the connection and the related dataset that will hold the processed data.
Here, a file stored on HDFS.
- Click ADD PIPELINE on the Pipelines page. Your new pipeline opens.
- Give the pipeline a meaningful name.
Example: Aggregate Average Purchase Price
- Click ADD SOURCE to open the panel that allows you to select your source data, here a list of hierarchical customer data about book purchases.
Select your dataset and click SELECT to add it to the pipeline.
Rename it if needed.
- Add an Aggregate processor to the pipeline. The configuration panel opens.
- Give a meaningful name to the processor.
Example: calculate average price
- In the GROUP BY area, click the recycle bin icon next to the empty field to remove it, as you want the whole dataset to be aggregated into a single record.
- In the OPERATIONS area:
- Select .product.price in the Field path list and Average in the Operation list, as you want to calculate the average price of all the books purchased by customers.
- Name the generated field (Output field name), avgPrice for example.
- Click SAVE to save your configuration.
You can preview the calculated data after the aggregating operation: the average book price is 13.96 dollars.
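To make the aggregation concrete, the following sketch reproduces the Aggregate processor's logic in plain Python. The records below are illustrative stand-ins for the hierarchical customer data (the actual contents of aggregate-customers.json may differ); only the .product.price path and the avgPrice output field name come from the steps above.

```python
import json
from statistics import mean

# Hypothetical sample records mimicking the hierarchical customer data;
# each record nests product information (title, price) under "product".
records = [
    {"id": 1, "product": {"title": "Book A", "price": 12.50}},
    {"id": 2, "product": {"title": "Book B", "price": 15.00}},
    {"id": 3, "product": {"title": "Book C", "price": 14.38}},
]

# With no GROUP BY field, the whole dataset collapses into one record:
# the average of .product.price across all records.
avg_price = round(mean(r["product"]["price"] for r in records), 2)

# The Output field name configured in the processor was avgPrice.
output = {"avgPrice": avg_price}
print(json.dumps(output))
```

With these sample values the result is {"avgPrice": 13.96}, matching the preview shown in the pipeline.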
- Click the ADD DESTINATION item on the pipeline to open the panel that allows you to select the dataset that will hold your output data (HDFS). Rename it if needed.
- On the top toolbar of Talend Cloud Pipeline Designer, select your run profile in the list (for more information, see Run profiles).
- Click the run icon to run your pipeline.
Your pipeline is executed: the average book price is aggregated into a single record, and the output flow is sent to the target system you indicated.