These properties are used to configure tCollectAndCheck running in the Spark Batch Job framework.
The Spark Batch tCollectAndCheck component belongs to the Technical family.
The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
|Select this check box to retrieve connection information and credentials from a configuration component. You must select this check box for the following types of input data to be checked:
In the drop-down list that appears, select the configuration component whose connection details Spark should use to connect to the database. For example, to check Snowflake data, select the tSnowflakeConfiguration component.
Note: If you want to retrieve data from S3, you do not have to use tS3Configuration; you only need to enter the full path of the file in the Path or table name field in the Basic settings view.
Type of input
|Select the type of input data to be checked from the drop-down list.
Path or table name
|Enter the path to the file or the table to be checked in double quotation marks.
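For example (both values below are hypothetical), a file path or a table name would be entered in this field as:

```
"/user/talend/input/customers.csv"
"my_schema.customers"
```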
Field separator
|Enter a character, a string, or a regular expression to separate fields for the transferred data.
Row separator
|Enter the separator used to identify the end of a row.
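Typical separator values (illustrative only) are a semicolon for fields and a newline for rows, entered as Java string literals:

```
Field separator: ";"
Row separator:   "\n"
```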
Use context variable
|If you have already created the context variable representing the reference file to be used, select this check box and enter this variable in the Variable name field that is displayed.
The syntax to call a variable is context.VariableName.
For more information about variables, see Using contexts and variables.
If you do not want to use context variables to represent the reference data to be used, enter this reference data directly in this field.
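The context.VariableName syntax can be sketched as follows in Java, the language of generated Talend Jobs. This is only an illustration: the variable name reference_file and its value are hypothetical, and in a real Job the context class is generated by the Studio from the variables you define.

```java
// Minimal sketch of the context.VariableName syntax. The Context class
// below is a stand-in for the one the Studio generates; the variable
// name "reference_file" and its value are hypothetical.
public class ContextExample {
    static class Context {
        public String reference_file = "/user/talend/reference.csv"; // hypothetical value
    }

    static final Context context = new Context();

    public static void main(String[] args) {
        // The syntax to call a variable is context.VariableName:
        String path = context.reference_file;
        System.out.println(path);
    }
}
```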
Due to the migration of the component from RDD to dataset, the pattern for a date can only be yyyy-MM-dd, and the pattern for a timestamp can only be yyyy-MM-dd HH:mm:ss.
This applies to Spark 2.1 and later versions.
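The accepted patterns correspond to the standard Java date/time pattern letters; a small sketch (the sample values are arbitrary) shows what reference values written with these patterns look like when parsed:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Illustrates the only accepted patterns for dates (yyyy-MM-dd) and
// timestamps (yyyy-MM-dd HH:mm:ss) on Spark 2.1 or later. The sample
// values are arbitrary.
public class PatternCheck {
    public static void main(String[] args) {
        DateTimeFormatter dateFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        DateTimeFormatter timestampFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

        // Reference values written with any other pattern would fail to parse.
        LocalDate date = LocalDate.parse("2021-06-30", dateFormat);
        LocalDateTime timestamp = LocalDateTime.parse("2021-06-30 14:05:00", timestampFormat);

        System.out.println(date);
        System.out.println(timestamp);
    }
}
```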
Keep the order from the reference
|If the RDDs to be checked are sorted, select this check box to keep your reference data ordered.
When the reference is empty, expect no incoming value
|By default, this check box is cleared, meaning that when a field in the reference data is empty, the test expects an equally empty field in the incoming datasets being verified in order to validate the test result.
If you want the test to expect no value when the reference is empty, select this check box.
This component is used as an end component and requires an input link.
This component is added automatically to a test case being created to show the test result in the console of the Run view.
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job needs its dependent JAR files at execution time, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
This connection is effective on a per-Job basis.