These properties are used to configure tSnowflakeConfiguration running in the Spark Batch Job framework.
The Spark Batch tSnowflakeConfiguration component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
Basic settings
Account | In the Account field, enter, in double quotation marks, the account name that has been assigned to you by Snowflake. |
Region | Select an AWS or Azure region from the drop-down list. |
Username and Password | Enter, in double quotation marks, your authentication information to log in to Snowflake. |
Database | Enter, in double quotation marks, the name of the Snowflake database to be used. This name is case-sensitive and is normally upper case in Snowflake. |
Database Schema | Enter, in double quotation marks, the name of the database schema to be used. This name is case-sensitive and is normally upper case in Snowflake. |
Warehouse | Enter, in double quotation marks, the name of the Snowflake warehouse to be used. This name is case-sensitive and is normally upper case in Snowflake. |
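For illustration only, the following Scala sketch shows how the Basic settings above map onto the open source Snowflake connector for Apache Spark, which is the kind of connector a Spark Batch Job relies on to reach Snowflake. This is not the code that Talend generates; all values (account host, credentials, database, schema, warehouse, table name) are hypothetical placeholders:

import org.apache.spark.sql.SparkSession

object SnowflakeConfigSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SnowflakeConfigSketch").getOrCreate()

    // Each entry corresponds to a Basic settings field of tSnowflakeConfiguration.
    val sfOptions = Map(
      "sfURL"       -> "myaccount.us-east-1.snowflakecomputing.com", // Account + Region
      "sfUser"      -> "TALEND_USER",                                // Username
      "sfPassword"  -> "********",                                   // Password
      "sfDatabase"  -> "MY_DATABASE",                                // Database (case-sensitive, usually upper case)
      "sfSchema"    -> "PUBLIC",                                     // Database Schema
      "sfWarehouse" -> "MY_WAREHOUSE"                                // Warehouse
    )

    // Read one table through the connector to check that the configuration works.
    val df = spark.read
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "MY_TABLE")
      .load()

    df.show()
    spark.stop()
  }
}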
Advanced settings
Use Custom Region | Select this check box to use a custom Snowflake region. |
Custom Region | Enter, in double quotation marks, the name of the region to be used. This name is case-sensitive and is normally upper case in Snowflake. |
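As a rough sketch of what the custom region changes: with region-qualified Snowflake account locators, the region is part of the connection host name that the Spark connector receives. The identifiers below are hypothetical, and the exact host format depends on your Snowflake account and cloud platform:

// Hypothetical account locator and region; the resulting host plays the role of sfURL above.
val account      = "myaccount"
val customRegion = "west-europe.azure"
val sfUrl        = s"$account.$customRegion.snowflakecomputing.com"
// => "myaccount.west-europe.azure.snowflakecomputing.com"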
Usage
Usage rule | This component is used standalone, with no need to be connected to other components. The configuration in a tSnowflakeConfiguration component applies only to the Snowflake-related components that use this configuration and that are in the same Job. |
Spark Connection | In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |
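The Spark Configuration tab handles the transfer of the dependent jar files for you, but as a minimal sketch of the underlying mechanism, a hand-written equivalent would set standard Spark properties pointing at a directory the cluster can read. The paths below are hypothetical placeholders:

import org.apache.spark.sql.SparkSession

// Dependent jar files are copied to a cluster-accessible directory,
// and Spark is told where to find them.
val spark = SparkSession.builder()
  .appName("SnowflakeBatchJob")
  // Comma-separated list of jars made available to the driver and executors.
  .config("spark.jars",
    "hdfs:///user/talend/lib/spark-snowflake.jar,hdfs:///user/talend/lib/snowflake-jdbc.jar")
  // On YARN, the staging directory used while submitting the application.
  .config("spark.yarn.stagingDir", "hdfs:///user/talend/staging")
  .getOrCreate()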