
Linking the components to design the flow of Delta Lake data

Drop and link the components to be used to read and process your Delta Lake data.

Procedure

  1. In the Integration perspective of the Studio, create an empty Spark Batch Job from the Job Designs node in the Repository tree view.
  2. In the workspace, enter the name of each component to be used and select it from the list that appears. In this scenario, the components are: one tS3Configuration (labeled s3_flights), two tDeltaLakeInput components (labeled flights_latest_version and flights_first_version, respectively), two tAggregateRow components (both labeled count_per_flights), two tPartition components (both labeled repart), one tMap, and one tFileOutputDelimited.
  3. Connect these components using Row > Main links, as shown in the image above.
  4. Leave the tS3Configuration component unconnected.
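
The flow these components build can be sketched in plain Python (a hypothetical illustration with made-up data and names; the actual Job runs these steps on Spark): read two versions of the flights data, count the rows per flight in each version, join the two counts, and write the comparison as delimited text.

```python
# Hypothetical sketch of the Job's data flow in plain Python.
# In the Studio, the same steps are performed by tDeltaLakeInput
# (read one version of the table), tAggregateRow (count per flight),
# tMap (join the two aggregates), and tFileOutputDelimited (write
# delimited output). The sample data below is invented.
import csv
import io
from collections import Counter

# Stand-ins for two versions of the Delta Lake flights table.
flights_first_version = ["AA", "AA", "UA"]
flights_latest_version = ["AA", "AA", "AA", "UA", "DL"]

# tAggregateRow: count rows per flight in each version.
first_counts = Counter(flights_first_version)
latest_counts = Counter(flights_latest_version)

# tMap: join the two aggregates on the flight key.
joined = {
    flight: (first_counts.get(flight, 0), latest_counts.get(flight, 0))
    for flight in sorted(set(first_counts) | set(latest_counts))
}

# tFileOutputDelimited: write the comparison as delimited text.
buffer = io.StringIO()
writer = csv.writer(buffer, delimiter=";")
writer.writerow(["flight", "count_first", "count_latest"])
for flight, (c_first, c_latest) in joined.items():
    writer.writerow([flight, c_first, c_latest])
print(buffer.getvalue())
```

The tS3Configuration component has no equivalent line here because it carries connection settings rather than data, which is why it stays unconnected in the Job.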
