Writing and reading data from S3 (Databricks on AWS) - 7.3

Version: 7.3
Language: English
Product: Talend Big Data, Talend Big Data Platform, Talend Data Fabric, Talend Real-Time Big Data Platform
Module: Talend Studio
Content: Design and Development > Designing Jobs > Hadoop distributions > Databricks; Design and Development > Designing Jobs > Serverless > Databricks
Last publication date: 2024-02-21

In this scenario, you create a Spark Batch Job that uses tS3Configuration and the Parquet components to write data to S3 and then read that data back from S3.

This scenario applies only to Talend products with Big Data.

For more technologies supported by Talend, see Talend components.

The sample data reads as follows:
01;ychen

Each record contains an ID number and the user name this ID is assigned to, separated by a semicolon.

Note that the sample data is created for demonstration purposes only.
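
Talend Studio generates the Spark code for this Job, so you do not write it yourself; the following standalone Scala sketch only illustrates the logic the Job performs: configure S3 access (the role of tS3Configuration), write a DataFrame to S3 in the Parquet format, and read it back. The class name, bucket name, object prefix, and credential placeholders below are assumptions for demonstration, not values taken from this scenario.

import org.apache.spark.sql.SparkSession

object S3ParquetRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3-parquet-roundtrip")
      .getOrCreate()

    // Role of tS3Configuration: give the s3a connector the credentials for the
    // target bucket (on Databricks this is often handled by an instance profile instead).
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "<your-access-key>")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "<your-secret-key>")

    import spark.implicits._

    // The sample record: an ID number and the user name it is assigned to.
    val users = Seq((1, "ychen")).toDF("id", "name")

    // Hypothetical bucket and prefix used only for this illustration.
    val target = "s3a://my-sample-bucket/users_parquet"

    // Write the data to S3 as Parquet, then read it back and display it.
    users.write.mode("overwrite").parquet(target)
    spark.read.parquet(target).show()

    spark.stop()
  }
}

In the Job itself, the same write and read steps are carried out by the Parquet output and input components, configured in the sections that follow.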