
Design the data flow of the Job working with S3 and Databricks on AWS

Procedure

  1. In the Integration perspective of Talend Studio, create an empty Spark Batch Job from the Job Designs node in the Repository tree view.
  2. In the workspace, enter the name of the component to be used and select this component from the list that appears. In this scenario, the components are tS3Configuration, tFixedFlowInput, tFileOutputParquet, tFileInputParquet and tLogRow.
    The tFixedFlowInput component is used to load the sample data into the data flow. In real-world practice, you can use the File input components, along with the processing components, to design a more sophisticated process that prepares your data for processing.
  3. Connect tFixedFlowInput to tFileOutputParquet using the Row > Main link.
  4. Connect tFileInputParquet to tLogRow using the Row > Main link.
  5. Connect tFixedFlowInput to tFileInputParquet using the Trigger > OnSubjobOk link.
  6. Leave tS3Configuration unconnected; the Spark Job uses it on its own to provide the connection details to S3. A sketch of the equivalent Spark logic follows this procedure.
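
For reference, the data flow designed in this procedure corresponds to a short Spark program. The following PySpark sketch illustrates the same write-then-read round trip; it is not the code Talend Studio generates from the Job, and the bucket name, credentials, and schema are placeholders for your own values.

    from pyspark.sql import SparkSession

    # Equivalent of tS3Configuration: supply the S3 connection details to the
    # whole Job. The keys and bucket below are placeholders; running against
    # S3 also requires the hadoop-aws package on the Spark classpath.
    spark = (
        SparkSession.builder
        .appName("s3_parquet_roundtrip")
        .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
        .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
        .getOrCreate()
    )

    # Equivalent of tFixedFlowInput: load a small fixed dataset into the flow.
    df = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])

    # Equivalent of tFileOutputParquet: write the flow to S3 as Parquet files.
    out_path = "s3a://my-bucket/sample_parquet"  # placeholder bucket and folder
    df.write.mode("overwrite").parquet(out_path)

    # Equivalent of tFileInputParquet -> tLogRow: once the write has completed
    # (the role of the OnSubjobOk trigger), read the files back and print them.
    spark.read.parquet(out_path).show()

    spark.stop()
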
