Writing and reading data from Azure Data Lake Storage using Spark (Azure Databricks) - Cloud - 8.0

Azure Data Lake Store

Version: Cloud, 8.0
Language: English
Product:
  Talend Big Data
  Talend Big Data Platform
  Talend Data Fabric
  Talend Data Integration
  Talend Data Management Platform
  Talend Data Services Platform
  Talend ESB
  Talend MDM Platform
  Talend Open Studio for Big Data
  Talend Open Studio for Data Integration
  Talend Open Studio for ESB
  Talend Real-Time Big Data Platform
Module: Talend Studio
Content:
  Data Governance > Third-party systems > Cloud storages > Azure components > Azure Data Lake Store components
  Data Quality and Preparation > Third-party systems > Cloud storages > Azure components > Azure Data Lake Store components
  Design and Development > Third-party systems > Cloud storages > Azure components > Azure Data Lake Store components

In this scenario, you create a Spark Batch Job that uses tAzureFSConfiguration and the Parquet components to write data to Azure Data Lake Storage and then read that data back from Azure.

This scenario applies only to subscription-based Talend products with Big Data.

For more technologies supported by Talend, see Talend components.
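Although the Job itself is assembled graphically in Talend Studio, the Spark logic it carries out is conceptually close to the following sketch, which you could run in a Databricks notebook. This is a minimal illustration only: the storage account, container, and service principal values are hypothetical placeholders, and the configuration keys shown are the standard Hadoop ABFS OAuth settings for ADLS Gen2 (the abfss:// scheme), not literal Talend-generated code; ADLS Gen1 uses the adl:// scheme and different credential settings.

// Minimal sketch: authenticate to ADLS Gen2 with a service principal,
// write one record as Parquet, then read it back.
// Assumes a Databricks notebook or spark-shell where `spark` is predefined.
import spark.implicits._

val account = "mystorageaccount"            // hypothetical storage account
val base = s"abfss://mycontainer@$account.dfs.core.windows.net"

// OAuth settings comparable to the connection details you enter in tAzureFSConfiguration.
spark.conf.set(s"fs.azure.account.auth.type.$account.dfs.core.windows.net", "OAuth")
spark.conf.set(s"fs.azure.account.oauth.provider.type.$account.dfs.core.windows.net",
  "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(s"fs.azure.account.oauth2.client.id.$account.dfs.core.windows.net", "<application-id>")
spark.conf.set(s"fs.azure.account.oauth2.client.secret.$account.dfs.core.windows.net", "<client-secret>")
spark.conf.set(s"fs.azure.account.oauth2.client.endpoint.$account.dfs.core.windows.net",
  "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

// Write the sample record as Parquet (the role of the Parquet output component)...
val sample = Seq(("01", "ychen")).toDF("id", "name")
sample.write.mode("overwrite").parquet(s"$base/sample_user")

// ...then read it back (the role of the Parquet input component).
spark.read.parquet(s"$base/sample_user").show()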

The sample data reads as follows:
01;ychen

This data consists of a user name and the ID number assigned to that user.

Note that the sample data is created for demonstration purposes only.
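For reference, here is how that semicolon-delimited record could be parsed into a two-column DataFrame in Spark. The column names id and name are illustrative choices, not names fixed by the scenario.

// Parse the sample record "01;ychen" into (id, name) columns.
// Assumes the same Databricks notebook or spark-shell session as above.
import spark.implicits._

val raw = Seq("01;ychen").toDS()            // the sample record as a Dataset[String]
val parsed = spark.read
  .option("sep", ";")                       // fields are separated by ';'
  .schema("id STRING, name STRING")         // illustrative column names
  .csv(raw)

parsed.show()
// +---+-----+
// | id| name|
// +---+-----+
// | 01|ychen|
// +---+-----+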