In this scenario, you create a Spark Batch Job that uses tAzureFSConfiguration and the Parquet components to write data to Azure Data Lake Storage and then read that data back from Azure.
This scenario applies only to subscription-based Talend solutions with Big Data.
For more technologies supported by Talend, see Talend components.
This data contains user names and the ID number assigned to each user.
Note that the sample data is created for demonstration purposes only.