tAzureFSConfiguration properties for Apache Spark Batch - 7.0

Azure Data Lake Store

These properties are used to configure tAzureFSConfiguration running in the Spark Batch Job framework.

The Spark Batch tAzureFSConfiguration component belongs to the Storage family.

The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.

Basic settings

Azure FileSystem

Select the file system to be used. The parameters to define are then displayed accordingly.

This component is designed to store your actual user or business data in a Data Lake Store system; it is not compatible with a Data Lake Store that is defined as primary storage in HDInsight. For this reason, when you launch the HDInsight cluster to be used with this component, always set Blob storage, not Data Lake Store, as primary storage.

When you use this component with Azure Blob Storage:

Blob storage account

Enter the name of the storage account you need to access. A storage account name can be found in the Storage accounts dashboard of the Microsoft Azure Storage system to be used. Ensure that the administrator of the system has granted you the appropriate access permissions to this storage account.

Account key

Enter the key associated with the storage account you need to access. Two keys are available for each account and by default, either of them can be used for this access.

Container

Enter the name of the blob container you need to use.
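
As a point of reference, the following Java sketch shows what these three parameters typically map to at the Spark/Hadoop level when a Job accesses Azure Blob Storage. The account name mystorageaccount, the container mycontainer, and the file path are placeholders; tAzureFSConfiguration generates the equivalent configuration for you, so this snippet is illustrative only.

    import org.apache.spark.sql.SparkSession;

    public class AzureBlobSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("AzureBlobSketch")
                    .getOrCreate();

            // The Blob storage account and Account key fields map to this
            // Hadoop property ("mystorageaccount" is a placeholder).
            spark.sparkContext().hadoopConfiguration().set(
                    "fs.azure.account.key.mystorageaccount.blob.core.windows.net",
                    "<account-key>");

            // The Container field becomes part of the wasbs:// URI used to
            // address the data in the Job.
            spark.read()
                 .text("wasbs://mycontainer@mystorageaccount.blob.core.windows.net/input/data.txt")
                 .show();
        }
    }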

When you use this component with Azure Data Lake Store:

Data Lake Store account

Enter the name of the Data Lake Store account you need to access. Ensure that the administrator of the system has granted you the appropriate access permissions to this account.

Client ID and Client key

Enter the authentication ID and the authentication key generated when you registered the application that the current Job uses to access Azure Data Lake Store.

Ensure that the application to be used has the appropriate permissions to access Azure Data Lake. You can verify this on the Required permissions view of this application on Azure. For further information, see the Azure documentation Assign the Azure AD application to the Azure Data Lake Store account file or folder.

Token endpoint

Copy and paste the OAuth 2.0 token endpoint that you can obtain from the Endpoints list accessible on the App registrations page.
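
To illustrate how these three values are typically consumed, the Java sketch below sets the standard client-credential properties of the Hadoop adl:// connector through Spark. The account name, client ID, client key, and tenant ID are placeholders; tAzureFSConfiguration performs the equivalent setup for you.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.spark.sql.SparkSession;

    public class AdlsOAuthSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("AdlsOAuthSketch")
                    .getOrCreate();

            Configuration conf = spark.sparkContext().hadoopConfiguration();

            // Client ID and Client key map to the client-credential
            // properties of the Hadoop adl:// connector.
            conf.set("fs.adl.oauth2.access.token.provider.type", "ClientCredential");
            conf.set("fs.adl.oauth2.client.id", "<client-id>");
            conf.set("fs.adl.oauth2.credential", "<client-key>");

            // Token endpoint maps to the OAuth 2.0 refresh URL
            // ("<tenant-id>" is a placeholder).
            conf.set("fs.adl.oauth2.refresh.url",
                    "https://login.microsoftonline.com/<tenant-id>/oauth2/token");

            // The Data Lake Store account name forms the adl:// URI.
            spark.read()
                 .text("adl://mydatalakeaccount.azuredatalakestore.net/input/data.txt")
                 .show();
        }
    }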

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, in components that have this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
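
For example, in the Java code that Talend Studio generates, an After variable can be read from the Job's globalMap. The component instance name tAzureFSConfiguration_1 below is a hypothetical example:

    // Hypothetical snippet, for example in a tJava component placed after
    // the Subjob: read the ERROR_MESSAGE After variable of the component
    // instance tAzureFSConfiguration_1.
    String errorMessage =
            (String) globalMap.get("tAzureFSConfiguration_1_ERROR_MESSAGE");

    if (errorMessage != null) {
        System.err.println("Azure connection error: " + errorMessage);
    }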

For further information about variables, see Talend Studio User Guide.

Usage

Usage rule

This component is used standalone in a Subjob to provide connection configuration to your Azure file system for the whole Job.

tAzureFSConfiguration does not support SSL access to Google Cloud Dataproc V1.1.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job needs its dependent jar files at execution time, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake store (technical preview) for Job deployment in the Spark configuration tab.
    • When using other distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.