tPubSubInput properties for Apache Spark Streaming - 7.3


These properties are used to configure tPubSubInput running in the Spark Streaming Job framework.

The Spark Streaming tPubSubInput component belongs to the Messaging family.

If you are using Dataproc 1.4 or later as your Spark cluster, make sure to select the Allow API access to all Google Cloud services in the same project check box when creating the cluster on Google Cloud Platform, so that the cluster can access the Pub/Sub service.

This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.

Basic settings

Define a Google Cloud configuration component

If you are using Dataproc as your Spark cluster, clear this check box.

Otherwise, select this check box to allow the Pub/Sub component to use the Google Cloud configuration information provided by a tGoogleCloudConfiguration component.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Note that the schema of this component is read-only. It stores the message body sent from the message producer.

Output type

Select the type of the data to be sent to the next component.

Typically, String is recommended, because tPubSubInput can automatically convert the Pub/Sub byte[] messages into strings for the Job to process. However, if the format of the messages is unknown to tPubSubInput, Protobuf for example, you can select byte and then use a Custom code component such as tJavaRow to deserialize the messages into strings so that the other components of the same Job can process them.
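For example, if the messages are actually UTF-8 text but are read as byte, the tJavaRow code can be as simple as the following sketch; the payload (byte[]) and message (String) column names are hypothetical and must match the schemas defined in your Job:

// Hypothetical tJavaRow body: decode the raw Pub/Sub payload into a string.
// "payload" and "message" are example column names, not fixed identifiers.
output_row.message = new String(input_row.payload, java.nio.charset.StandardCharsets.UTF_8);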

Topic name

Enter the name of the topic from which you want to consume the messages.

Subscription name

Enter the name of the subscription to use to consume the messages from the specified topic.

If the subscription exists, it must be connected to the given topic; if the subscription does not exist, it is created and connected to the given topic at runtime.
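
The component performs this check and creation itself at runtime. Purely as an illustration of the equivalent operation, the following standalone Java sketch uses the google-cloud-pubsub client library to create a pull subscription bound to an existing topic; PROJECT_ID, TOPIC_ID, and SUBSCRIPTION_ID are placeholders, and the exact class names may vary with the client library version:

import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PushConfig;

public class CreateSubscriptionSketch {
    public static void main(String[] args) throws Exception {
        // Create a pull subscription attached to an existing topic; this is
        // comparable to what tPubSubInput does when the subscription does
        // not exist yet. All IDs below are placeholders.
        try (SubscriptionAdminClient client = SubscriptionAdminClient.create()) {
            client.createSubscription(
                ProjectSubscriptionName.of("PROJECT_ID", "SUBSCRIPTION_ID"),
                ProjectTopicName.of("PROJECT_ID", "TOPIC_ID"),
                PushConfig.getDefaultInstance(), // empty push config = pull subscription
                10);                             // acknowledgement deadline in seconds
        }
    }
}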

Advanced settings

Storage level

From the Storage level drop-down list that is displayed, select how the cached RDDs are stored, such as in memory only or in memory and on disk.

For further information about each storage level, see https://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence.
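
These options map onto Spark's standard storage levels. As an illustration only, selecting memory and disk corresponds to the following call in Spark's Java API, where stream stands for a hypothetical JavaDStream of received records:

import org.apache.spark.storage.StorageLevel;

// "stream" is assumed to be the JavaDStream of records received by the Job.
// MEMORY_AND_DISK caches the underlying RDDs in memory and spills to disk.
stream.persist(StorageLevel.MEMORY_AND_DISK());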

Usage

Usage rule

This component is used as a start component and requires an output link.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job needs its dependent jar files to execute, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.
    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.
    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage to be used for Job deployment in the Spark configuration tab.
    • When using Qubole, add a tS3Configuration component to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration for Apache Spark Batch or tS3Configuration for Apache Spark Batch.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.

PubSub access permissions

When you use Pub/Sub with a Dataproc cluster, ensure that this cluster has the appropriate permissions to access the Pub/Sub service.

To do this, you can create the Dataproc cluster by selecting the Allow API access to all Google Cloud services in the same project check box in the advanced options on Google Cloud Platform, or by assigning the scopes explicitly on the command line, as in the following example for a low-resource test cluster:
gcloud beta dataproc clusters create CLUSTER_ID \
    --zone europe-west1-b \
    --master-machine-type n1-standard-2 \
    --master-boot-disk-size 50 \
    --num-workers 2 \
    --worker-machine-type n1-standard-2 \
    --worker-boot-disk-size 50 \
    --scopes 'https://www.googleapis.com/auth/cloud-platform' \
    --project PROJECT_ID