Reading the sample data from Azure Data Lake Storage - 7.3

Azure Data Lake Store

Version
7.3
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > Cloud storages > Azure components > Azure Data Lake Storage Gen2 components
Data Quality and Preparation > Third-party systems > Cloud storages > Azure components > Azure Data Lake Storage Gen2 components
Design and Development > Third-party systems > Cloud storages > Azure components > Azure Data Lake Storage Gen2 components
Last publication date
2023-06-12

Procedure

  1. Double-click tFileInputParquet to open its Component view.

  2. Select the Define a storage configuration component check box and then select the tAzureFSConfiguration component you configured in the previous steps.
  3. Click the [...] button next to Edit schema to open the schema editor.
  4. Click the [+] button to add the schema columns for the output; they must match the schema of the Parquet data to be read.

  5. Click OK to validate these changes and accept the propagation of the schema to the next component when prompted in the pop-up dialog box.
  6. In the Folder/File field, enter the name of the folder from which you need to read data. In this scenario, it is sample_user.
  7. Double-click tLogRow to open its Component view and select the Table radio button to display the result as a table.
  8. Press F6 to run this Job.
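At run time, the value in the Folder/File field is resolved against the storage settings from tAzureFSConfiguration into a full Azure Data Lake Storage Gen2 (abfss) URI. As a rough illustration outside Talend Studio, the sketch below builds such a URI by hand; the file system (container) and storage account names used here are hypothetical placeholders, not values from this scenario.

```python
# Sketch: combining a folder name (e.g. the sample_user value from the
# Folder/File field) with ADLS Gen2 storage settings into an abfss:// URI.
# The file system and account names are hypothetical placeholders.
def adls_gen2_uri(file_system: str, account: str, path: str) -> str:
    """Build the abfss:// URI used to address a folder in ADLS Gen2."""
    return f"abfss://{file_system}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

print(adls_gen2_uri("mycontainer", "mystorageaccount", "sample_user"))
# abfss://mycontainer@mystorageaccount.dfs.core.windows.net/sample_user
```

This is only meant to show how the pieces fit together; in the Job itself, tAzureFSConfiguration supplies the file system and account details, so the Folder/File field needs only the folder name.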

Results

Once the Job completes, you can find it on the Jobs page of the Web UI of your Databricks cluster and check its execution log there.