tPubSubOutput properties for Apache Spark Streaming

Google PubSub

author
Talend Documentation Team
EnrichVersion
6.5
EnrichProdName
Talend Real-Time Big Data Platform
Talend Data Fabric
task
Design and Development > Third-party systems > Cloud storages > Google PubSub components
Data Governance > Third-party systems > Cloud storages > Google PubSub components
Data Quality and Preparation > Third-party systems > Cloud storages > Google PubSub components
EnrichPlatform
Talend Studio

These properties are used to configure tPubSubOutput running in the Spark Streaming Job framework.

The Spark Streaming tPubSubOutput component belongs to the Messaging family.

The component in this framework is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Note that the schema of this component is read-only. It stores the messages to be published.

Define a Google Cloud configuration component

If you are using Dataproc as your Spark cluster, clear this check box.

Otherwise, select this check box to allow the Pub/Sub component to use the Google Cloud configuration information provided by a tGoogleCloudConfiguration component.

Topic name

Enter the name of the topic you want to publish messages to. This topic must already exist.
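
For example, you can check from the command line that the topic already exists before running the Job; TOPIC_ID and PROJECT_ID below are placeholders to replace with your own values:
gcloud pubsub topics describe TOPIC_ID --project PROJECT_ID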

Topic operation

Select the operation to be performed on the specified topic:
  • None: select this option if the topic to be used already exists.

  • Create if not exists: select this option to have the component create the topic if it does not exist yet.
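
If you prefer to create the topic yourself rather than rely on Create if not exists, you can do so from the command line; TOPIC_ID and PROJECT_ID below are placeholders to replace with your own values:
gcloud pubsub topics create TOPIC_ID --project PROJECT_ID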

Advanced settings

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the following connection pool parameters are good enough for most use cases.

  • Max total number of connections: enter the maximum number of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of connections to stay open at the same time.

  • Max waiting time (ms): enter the maximum amount of time the connection pool waits before responding to a request for a connection. The default is -1, that is, the pool waits indefinitely.

  • Min number of idle connections: enter the minimum number of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number of idle connections (connections not used) maintained in the connection pool.

Evict connections

Select this check box to define criteria to destroy connections in the connection pool. The following fields are displayed once you have selected it.

  • Time between two eviction runs: enter the time interval (in milliseconds) at the end of which the component checks the status of the connections and destroys the idle ones.

  • Min idle time for a connection to be eligible to eviction: enter the time interval (in milliseconds) at the end of which the idle connections are destroyed.

  • Soft min idle time for a connection to be eligible to eviction: this parameter works the same way as Min idle time for a connection to be eligible to eviction, except that it keeps at least the number of idle connections defined in the Min number of idle connections field.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component needs a Write component such as tWriteJSONField to define a serializedValue column in the input schema and fill it with the serialized data to be sent.
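
To verify what the Job publishes, you can, for example, attach a test subscription to the topic and pull a few messages from the command line; SUBSCRIPTION_ID and TOPIC_ID below are placeholders to replace with your own values:
gcloud pubsub subscriptions create SUBSCRIPTION_ID --topic TOPIC_ID
gcloud pubsub subscriptions pull SUBSCRIPTION_ID --auto-ack --limit 5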

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:
  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

PubSub access permissions

When you use Pub/Sub with a Dataproc cluster, ensure that this cluster has the appropriate permissions to access the Pub/Sub service.

To do this, when creating the Dataproc cluster, either check Allow API access to all Google Cloud services in the same project in the advanced options on Google Cloud Platform, or assign the scopes explicitly from the command line (the following example creates a low-resource test cluster):
gcloud beta dataproc clusters create CLUSTER_ID \
    --zone europe-west1-b \
    --master-machine-type n1-standard-2 \
    --master-boot-disk-size 50 \
    --num-workers 2 \
    --worker-machine-type n1-standard-2 \
    --worker-boot-disk-size 50 \
    --scopes 'https://www.googleapis.com/auth/cloud-platform' \
    --project PROJECT_ID
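
Once the cluster is created, you can confirm that the scope has been applied, for example with the following command; the field path shown is indicative and may vary with the gcloud version you are using:
gcloud beta dataproc clusters describe CLUSTER_ID \
    --format 'value(config.gceClusterConfig.serviceAccountScopes)'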