tKinesisOutput properties for Apache Spark Streaming - 7.3

These properties are used to configure tKinesisOutput running in the Spark Streaming Job framework.

The Spark Streaming tKinesisOutput component belongs to the Messaging family.

The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word "line" when naming the fields.

The schema of this component is read-only. You can click Edit schema to view the schema.

The read-only serializedValue column carries the body of the message to be added to Kinesis. Note that you must use a Write component such as tWriteJSONField to define the same serializedValue column in the input schema in order to send serialized data to this read-only column.

The other columns are automatically retrieved from the schema of the preceding component. They are added as headers to the message to be output.

Access key

Enter the access key ID that uniquely identifies an AWS Account. For further information about how to get your Access Key and Secret Key, see Getting Your AWS Access Keys.

Secret key

Enter the secret access key which, combined with the access key, constitutes your security credentials.

To enter the key, click the [...] button next to the field, and then, in the pop-up dialog box, enter the key between double quotes and click OK to save the settings.

Stream name

Enter the name of the Kinesis stream you want to add data to.

Endpoint URL

Enter the endpoint of the Kinesis service to be used. For example, https://kinesis.us-east-1.amazonaws.com. For the list of valid Kinesis endpoint URLs, see http://docs.aws.amazon.com/general/latest/gr/rande.html#ak_region.

Number of shard

Enter the number of partitions (shards, in Kinesis terms) to be created in the target Kinesis stream.
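
The Basic settings above map directly onto the Amazon Kinesis API. The following sketch ties them together; it is an illustration written against the AWS SDK for Java v1, not the code Talend Studio generates, and the credentials, stream name, and payload are placeholder values:

  import java.nio.ByteBuffer;
  import java.nio.charset.StandardCharsets;

  import com.amazonaws.auth.AWSStaticCredentialsProvider;
  import com.amazonaws.auth.BasicAWSCredentials;
  import com.amazonaws.client.builder.AwsClientBuilder;
  import com.amazonaws.services.kinesis.AmazonKinesis;
  import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
  import com.amazonaws.services.kinesis.model.PutRecordRequest;

  public class KinesisOutputSketch {
      public static void main(String[] args) {
          // Placeholder values standing in for the Access key and Secret key fields.
          BasicAWSCredentials credentials =
                  new BasicAWSCredentials("MY_ACCESS_KEY", "MY_SECRET_KEY");

          // Endpoint URL: the signing region must match the endpoint.
          AmazonKinesis client = AmazonKinesisClientBuilder.standard()
                  .withCredentials(new AWSStaticCredentialsProvider(credentials))
                  .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                          "https://kinesis.us-east-1.amazonaws.com", "us-east-1"))
                  .build();

          // Number of shard: create the target stream with 2 shards. In practice,
          // the stream must reach the ACTIVE state before records can be written.
          client.createStream("my-stream", 2);

          // serializedValue: the message body, for example a JSON document
          // produced upstream by a component such as tWriteJSONField.
          String serializedValue = "{\"id\":42,\"name\":\"Ada\"}";

          PutRecordRequest request = new PutRecordRequest()
                  .withStreamName("my-stream")      // Stream name
                  .withPartitionKey("42")           // determines the target shard
                  .withData(ByteBuffer.wrap(serializedValue.getBytes(StandardCharsets.UTF_8)));
          client.putRecord(request);
      }
  }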

Advanced settings

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values of the following connection pool parameters suit most use cases; a sketch after this list illustrates how they map onto a typical pool configuration.

  • Max total number of connections: enter the maximum number of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of open connections at the same time.

  • Max waiting time (ms): enter the maximum amount of time to wait for the connection pool to return a connection before the request fails. By default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number of idle connections (connections not used) maintained in the connection pool.
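
As a rough sketch, assuming a pool modeled on Apache Commons Pool 2 (the documentation does not state which pool implementation the component uses internally), the four parameters above correspond to a configuration such as:

  import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

  public class PoolConfigSketch {
      public static void main(String[] args) {
          GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();
          config.setMaxTotal(8);        // Max total number of connections; -1 = unlimited
          config.setMaxWaitMillis(-1);  // Max waiting time (ms); -1 = wait indefinitely
          config.setMinIdle(0);         // Min number of idle connections
          config.setMaxIdle(8);         // Max number of idle connections
      }
  }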

Evict connections

Select this check box to define criteria for destroying connections in the connection pool. The following fields are displayed once you select it; a sketch after this list illustrates them.

  • Time between two eviction runs: enter the time interval (in milliseconds) at the end of which the component checks the status of the connections and destroys the idle ones.

  • Min idle time for a connection to be eligible to eviction: enter the time (in milliseconds) a connection can remain idle before it is destroyed.

  • Soft min idle time for a connection to be eligible to eviction: this parameter works the same way as Min idle time for a connection to be eligible to eviction, but it always keeps the minimum number of idle connections, the number you define in the Min number of idle connections field.
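
Under the same Commons Pool 2 assumption as the sketch above, the eviction criteria could be expressed as follows; the durations are placeholders:

  import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

  public class EvictionConfigSketch {
      public static void main(String[] args) {
          GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();
          config.setMinIdle(2);
          // Time between two eviction runs: check the connections every 60 s.
          config.setTimeBetweenEvictionRunsMillis(60_000L);
          // Min idle time for a connection to be eligible to eviction:
          // destroy connections idle for more than 2 minutes.
          config.setMinEvictableIdleTimeMillis(120_000L);
          // Soft min idle time: same threshold, but never evicts below minIdle.
          config.setSoftMinEvictableIdleTimeMillis(120_000L);
      }
  }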

Usage

Usage rule

This component is used as an end component and requires an input link.

This component requires a Write component such as tWriteJSONField to define the serializedValue column in the input schema in order to send serialized data.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job requires its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.

Limitation

Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find and add all missing JARs on the Modules tab in the Integration perspective of your studio. For details, see Installing external modules in Talend Help Center (https://help.talend.com).