Design the data flow of the Job working with Azure and Databricks

Procedure

  1. In the Integration perspective of the Studio, create an empty Spark Batch Job from the Job Designs node in the Repository tree view.
  2. In the workspace, enter the name of the component to be used and select this component from the list that appears. In this scenario, the components are tAzureFSConfiguration, tFixedFlowInput, tFileOutputParquet, tFileInputParquet and tLogRow.
    The tFixedFlowInput component is used to load the sample data into the data flow. In real-world practice, you would use File input components, along with processing components, to design a more sophisticated process that prepares the data to be processed. A Spark sketch of the equivalent data flow is shown after this procedure.
  3. Connect tFixedFlowInput to tFileOutputParquet using the Row > Main link.
  4. Connect tFileInputParquet to tLogRow using the Row > Main link.
  5. Connect tFixedFlowInput to tFileInputParquet using the Trigger > OnSubjobOk link.
  6. Leave tAzureFSConfiguration as it is, without connecting it to any other component.
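
For reference only, the following Spark (Scala) sketch shows roughly what this data flow does once the Job runs on Databricks: it writes a fixed sample dataset to Azure Data Lake Store as Parquet files, then reads the files back and prints the rows. The adl:// path, the object name, and the column names are placeholders, and the sketch assumes that the ADLS credentials normally supplied through tAzureFSConfiguration are already configured on the cluster.

    import org.apache.spark.sql.SparkSession

    object AdlsParquetRoundTrip {
      def main(args: Array[String]): Unit = {
        // getOrCreate() reuses the SparkSession that Databricks provides
        // when the Job runs on the cluster.
        val spark = SparkSession.builder().appName("AdlsParquetRoundTrip").getOrCreate()
        import spark.implicits._

        // Placeholder target directory; in the Talend Job the file system and
        // credentials come from tAzureFSConfiguration.
        val targetDir = "adl://<your_adls_account>.azuredatalakestore.net/talend/sample_parquet"

        // Equivalent of tFixedFlowInput: a small, fixed sample dataset.
        val sample = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")

        // Equivalent of tFileOutputParquet: write the dataset as Parquet files.
        sample.write.mode("overwrite").parquet(targetDir)

        // Equivalent of tFileInputParquet + tLogRow: read the files back and print the rows.
        spark.read.parquet(targetDir).show(truncate = false)
      }
    }

In the Talend Job itself, the Parquet components reference tAzureFSConfiguration through their storage configuration settings rather than through a workspace link, which is why that component can remain unconnected.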