The following instructions show how to read a file from HDFS, process it, and save the results to Amazon S3 using a Big Data Batch - Spark Job.
For more technologies supported by Talend, see Talend components.
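For orientation, the sketch below shows, in plain Spark Java code, the shape of what the finished Job does: read a file from HDFS, apply a processing step, and write the result to Amazon S3. This is not code generated by Talend; the paths, bucket name, and the distinct() processing step are illustrative placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HdfsToS3Sketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hdfs-to-s3-sketch")
                .getOrCreate();

        // Read the input file from HDFS (hypothetical NameNode and path).
        Dataset<Row> input = spark.read()
                .csv("hdfs://namenode:8020/user/talend/input.csv");

        // Placeholder processing step; a real Job applies its own logic here.
        Dataset<Row> processed = input.distinct();

        // Write the result to S3 through the s3a connector (hypothetical bucket).
        processed.write().csv("s3a://my-bucket/output/");

        spark.stop();
    }
}
```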
Because Spark is not tied to any particular file system, you must specify which file system your Spark Job uses.
In this scenario, tHDFSConfiguration is used by Spark to connect to the HDFS system where the JAR files the Job depends on are transferred.
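As a rough illustration of what such a file system configuration component supplies under the hood, the sketch below sets Hadoop's default file system on a SparkSession so that scheme-less paths resolve against HDFS. The NameNode host, port, and input path are hypothetical.

```java
import org.apache.spark.sql.SparkSession;

public class FileSystemConfigSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("filesystem-config-sketch")
                // spark.hadoop.* properties are forwarded to the Hadoop
                // configuration; fs.defaultFS selects the file system that
                // un-prefixed paths resolve against. Host/port are placeholders.
                .config("spark.hadoop.fs.defaultFS", "hdfs://namenode:8020")
                .getOrCreate();

        // A scheme-less path now resolves against the configured HDFS.
        spark.read().text("/user/talend/input.txt").show();

        spark.stop();
    }
}
```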
YARN mode (YARN client or YARN cluster):
- When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.
- When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.
- When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
- When using Qubole, add a tS3Configuration component to your Job to write your actual business data to the S3 system. Without tS3Configuration, this business data is written to the Qubole HDFS system and destroyed once you shut down your cluster (see the sketch after this list).
- When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.
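The following sketch illustrates, at the plain Spark level, the kind of s3a connector settings a tS3Configuration-style component contributes so that business data lands in S3 rather than in the cluster's transient HDFS. The access key, secret key, and bucket name are placeholders; in a real deployment the YARN client or cluster mode is chosen when the Job is submitted.

```java
import org.apache.spark.sql.SparkSession;

public class S3ConfigSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("s3-config-sketch")
                .master("yarn") // deploy mode (client/cluster) is set at submit time
                // Hadoop s3a credentials, forwarded via the spark.hadoop.* prefix.
                .config("spark.hadoop.fs.s3a.access.key", "<ACCESS_KEY>")
                .config("spark.hadoop.fs.s3a.secret.key", "<SECRET_KEY>")
                .getOrCreate();

        // With the connector configured, s3a:// paths resolve like any other
        // file system, so output survives cluster shutdown.
        spark.range(10).write().parquet("s3a://my-bucket/business-data/");

        spark.stop();
    }
}
```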
If you are using Databricks without any configuration component present in your Job, your business data is written directly to DBFS (Databricks File System).
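As a hedged illustration of that default, the snippet below shows a write that, on a Databricks cluster, lands in DBFS because the path uses the dbfs:/ scheme and no other storage configuration is present. The output path is a placeholder; adding an S3-style configuration, as sketched above, would redirect such writes to external storage instead.

```java
import org.apache.spark.sql.SparkSession;

public class DbfsWriteSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("dbfs-write-sketch")
                .getOrCreate();

        // Without a storage configuration component, output stays in DBFS
        // and is addressed through the dbfs:/ scheme (hypothetical path).
        spark.range(100).write().parquet("dbfs:/tmp/talend-output/");

        spark.stop();
    }
}
```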