tCassandraInput properties for Apache Spark Batch - Cloud - 8.0

These properties are used to configure tCassandraInput running in the Spark Batch Job framework.

The Spark Batch tCassandraInput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

The schema of this component does not support the Object type and the List type.

Keyspace

Type in the name of the keyspace from which you want to read data.

Column family

Type in the name of the column family from which you want to read data.

Selected column function

Select the columns for which you need to retrieve the TTL (time to live) or the writeTime property.

The TTL property determines how long the records in a column live before they expire; the writeTime property indicates when a record was created.

For further information about these properties, see the DataStax documentation for Cassandra CQL.
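Talend generates the actual read code for you, but conceptually these selectors map to the TTL and WriteTime column functions of the DataStax Spark Cassandra Connector that Spark Batch Jobs build on. The following Scala sketch (spark-shell style) only illustrates the idea; the keyspace ks, the table sensor_data, the column names, and the host are illustrative, not part of any generated Job code:

    import com.datastax.spark.connector._  // adds cassandraTable() to SparkContext
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ttl-writetime-sketch")
      .set("spark.cassandra.connection.host", "127.0.0.1")  // illustrative host
    val sc = new SparkContext(conf)

    // Retrieve "value" together with its TTL and write time; TTL and
    // WriteTime are the connector's counterparts of the CQL ttl() and
    // writetime() functions.
    val rows = sc.cassandraTable("ks", "sensor_data")
      .select("id", "value", TTL("value"), WriteTime("value"))

    rows.collect().foreach(println)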

Filter function

Define the filters you need to use to select the records to be processed.

The component generates a WHERE ... ALLOW FILTERING clause from the filters you define, so this filter function is subject to the limitations of that Cassandra clause.
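For illustration, each filter becomes one condition of that clause, so the CQL issued is roughly SELECT ... FROM keyspace.columnfamily WHERE <your filters> ALLOW FILTERING. The Spark Cassandra Connector expresses the same idea through its where method, as in this Scala sketch (reusing the SparkContext from the previous sketch; table and column names remain illustrative):

    // Roughly equivalent CQL:
    //   SELECT * FROM ks.sensor_data WHERE value > 10 ALLOW FILTERING
    // Filtering on a column that is neither part of the primary key nor
    // indexed is what requires ALLOW FILTERING, and such scans can be
    // expensive on large tables.
    val filtered = sc.cassandraTable("ks", "sensor_data")
      .where("value > ?", 10)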

Order by clustering column

Select how you need to sort the retrieved records. Select NONE if you do not need to sort the data.

Use limit

Select this check box to display the Limit per partition field, in which you enter the number of rows to be retrieved from each partition, starting from the first row.
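Together, the sort and the limit correspond to the CQL ORDER BY and PER PARTITION LIMIT options, which the Spark Cassandra Connector exposes as methods on the scan. A Scala sketch continuing the examples above (names are illustrative):

    // Sort the rows of each Cassandra partition by the clustering column
    // (CQL ORDER BY), then keep only the first 100 rows of each partition
    // (CQL PER PARTITION LIMIT 100).
    val limited = sc.cassandraTable("ks", "sensor_data")
      .withAscOrder            // or .withDescOrder for descending order
      .perPartitionLimit(100)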

Usage

Usage rule

This component is used as a start component and requires an output link.

This component should use one and only one tCassandraConfiguration component present in the same Job to connect to Cassandra. The presence of more than one tCassandraConfiguration component in the same Job causes the execution of the Job to fail.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job needs its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.

    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.