Setting up the Job - Cloud - 8.0

Java custom code

Talend Big Data
Talend Big Data Platform
Talend Cloud
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Open Studio for Big Data
Talend Open Studio for Data Integration
Talend Open Studio for Data Quality
Talend Open Studio for ESB
Talend Real-Time Big Data Platform
Talend Studio

About this task

  • This procedure is specific to ADLS Databricks Gen2.
  • You can create this Job in the Big Data Batch or Big Data Streaming node.


  1. Drop the following components from the Palette onto the design workspace: tJava and tAzureFSConfiguration.
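The tJava component executes the Java statements entered in its Code field, where Talend exposes the Job's context and the globalMap registry. The standalone sketch below simulates that environment so it can run outside a Job; the "filename" key and the helper method are illustrative, not part of the Talend API.

```java
import java.util.HashMap;
import java.util.Map;

public class TJavaSketch {
    // Builds the log line a tJava snippet might print.
    // "filename" is a hypothetical globalMap key chosen for this example.
    static String logLine(Map<String, Object> globalMap) {
        return "Processing: " + globalMap.get("filename");
    }

    public static void main(String[] args) {
        // In a real Job, globalMap is injected by the Talend runtime; simulated here.
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("filename", "/tmp/input.csv");
        System.out.println(logLine(globalMap));
    }
}
```

In an actual tJava Code field you would write only the statements in the body of main, since Talend generates the surrounding class and populates globalMap for you.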
  2. Go to your Databricks account.
  3. On the Configuration tab of your Databricks cluster page, expand the Advanced options.
  4. In the Spark tab, add the following Spark property, replacing the placeholders with your storage account name and its access key:

    spark.hadoop.fs.azure.account.key.<storage_account>.dfs.core.windows.net <key>

    This key is associated with the storage account to be used. You can find it in the Access keys blade of this storage account. Two keys are available for each account; by default, either one can be used for this access.

    Ensure that the account to be used has the appropriate read/write rights and permissions.
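The Hadoop ABFS driver derives the name of the account-key property mechanically from the storage account name, so it can be built in code. The sketch below shows that pattern; the helper name and the sample account name are illustrative, not from the Talend documentation.

```java
public class AbfsKeyProperty {
    // Builds the Hadoop ABFS account-key property name for a given ADLS Gen2
    // storage account, prefixed with "spark.hadoop." as required when it is
    // set through a Databricks cluster's Spark configuration.
    static String keyProperty(String storageAccount) {
        return "spark.hadoop.fs.azure.account.key."
                + storageAccount + ".dfs.core.windows.net";
    }

    public static void main(String[] args) {
        // "mystorageacct" is a placeholder account name for this example.
        System.out.println(keyProperty("mystorageacct"));
    }
}
```

The value assigned to this property is the access key copied from the Access keys blade of the storage account.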