tKafkaInput Standard properties - Cloud - 8.0


These properties are used to configure tKafkaInput running in the Standard Job framework.

The Standard tKafkaInput component belongs to the Internet family.

The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

Note that the schema of this component is read-only. It stores the messages sent from the message producer.

Output type

Select the type of data to be sent to the next component from the drop-down list:
  • String: the component sends messages serialized into strings.
  • byte[]: the component sends messages serialized into byte arrays.
  • ConsumerRecord: the component sends messages serialized as key/value pairs. The message key and the message value can be serialized as Avro.

Typically, using String is recommended, because tKafkaInput can automatically translate the Kafka byte[] messages into strings for the Job to process. However, if the format of the Kafka messages is not known to tKafkaInput, for example Protobuf, you can select byte[] and then use a custom code component such as tJavaRow to deserialize the messages into strings so that the other components of the same Job can process them, as sketched below.
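
For illustration, the deserialization step in a tJavaRow component could look like the following minimal sketch. The column names payload (incoming byte[]) and message (outgoing String) are hypothetical and must match your actual schemas.

  // Hypothetical tJavaRow body: decode the raw Kafka bytes into a string.
  // For a real Protobuf payload, call the generated parser instead, for
  // example MyMessage.parseFrom(input_row.payload).
  output_row.message = new String(input_row.payload, java.nio.charset.StandardCharsets.UTF_8);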

Use an existing connection

Select this check box and, from the Component List drop-down list, select the connection component whose connection details you want to reuse.

Version

Select the version of the Kafka cluster to be used.

If you have installed the 8.0.1-R2024-02 Talend Studio Monthly update or a later one delivered by Talend, Kafka versions 2.4.x and earlier are deprecated.

Zookeeper quorum list

Enter the address of the ZooKeeper service of the Kafka cluster to be used.

Each address should take the form hostname:port, that is, the name and the port of the hosting node in this Kafka cluster.

If you need to specify several addresses, separate them using a comma (,).

This field is available only for Kafka 0.8.2.0.

Broker list

Enter the addresses of the broker nodes of the Kafka cluster to be used.

Each address should take the form hostname:port, that is, the name and the port of the hosting node in this Kafka cluster.

If you need to specify several addresses, separate them using a comma (,).

This field is available for Kafka 0.9.0.1 and later.

Topic name

Enter the name of the topic from which tKafkaInput receives the feed of messages.

Consumer group ID

Enter the name of the consumer group to which you want the current consumer (the tKafkaInput component) to belong.

This consumer group is created at runtime if it does not already exist.
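
To illustrate what these Basic settings fields correspond to, here is a minimal sketch of the equivalent plain Kafka consumer configuration. The host names, topic name and group ID are placeholders.

  import java.util.Arrays;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.ByteArrayDeserializer;

  Properties props = new Properties();
  // Broker list: comma-separated hostname:port pairs
  props.put("bootstrap.servers", "broker1.example.com:9092,broker2.example.com:9092");
  // Consumer group ID: created on the broker side if it does not exist yet
  props.put("group.id", "talend-consumer-group");
  props.put("key.deserializer", ByteArrayDeserializer.class.getName());
  props.put("value.deserializer", ByteArrayDeserializer.class.getName());

  KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
  consumer.subscribe(Arrays.asList("my_topic")); // Topic name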

Reset offsets on consumer group

Select this check box to clear the offsets saved for the consumer group to be used so that this consumer group is handled as a new group that has not consumed any messages.

New consumer group starts from

Select the starting point from which the messages of a topic are consumed.

In Kafka, the increasing ID number of a message is called its offset. When a new consumer group starts, you can select beginning from this list to start consumption from the oldest message of the entire topic, or select latest to wait for a new message.

Note that the consumer group determines its starting point only from messages whose offsets have been committed.

Each consumer group has its own counter to remember the position of the messages it has consumed. For this reason, once a consumer group starts to consume messages of a given topic, it recognizes the latest message only with regard to the position where it stopped consuming, rather than with regard to the entire topic. Based on this principle, the following behaviors can be expected:

  • If you are resuming an existing consumer group, this option determines the starting point for this consumer group only if it does not already have a committed starting point. Otherwise, this consumer group starts from that committed starting point. For example, if a topic has 100 messages and an existing consumer group has successfully processed 50 of them and committed their offsets, then the same consumer group restarts from offset 51.

  • If you create a new consumer group or reset an existing one, which in either case means that this group has not consumed any messages of this topic, then starting it from latest makes the group wait for offset 101.
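
Continuing the consumer sketch above, the beginning and latest choices correspond to the standard auto.offset.reset consumer property, which only takes effect when the group has no committed offset yet:

  // "beginning": start from the oldest available message of the topic
  props.put("auto.offset.reset", "earliest");
  // "latest": wait for messages produced after the group starts
  // props.put("auto.offset.reset", "latest");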

Auto-commit offsets

Select this check box to make tKafkaInput automatically save its consumption state at the end of each given time interval. You need to define this interval in the Interval field that is displayed.

Note that the offsets are committed only at the end of each interval. If your Job stops in the middle of an interval, the message consumption state within this interval is not committed.
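
In the plain consumer configuration, this behavior maps to the standard auto-commit properties; the interval value below is a placeholder:

  props.put("enable.auto.commit", "true");
  // Offsets are committed only every auto.commit.interval.ms; if the Job
  // stops mid-interval, that interval's consumption state is not committed.
  props.put("auto.commit.interval.ms", "5000");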

Stop after a maximum total duration (ms)

Select this check box and in the pop-up field, enter the duration (in milliseconds) at the end of which tKafkaInput stops running.

Stop after receiving a maximum number of messages

Select this check box and in the pop-up field, enter the maximum number of messages you want tKafkaInput to receive before it automatically stops running.

Stop after maximum time waiting between messages (ms)

Select this check box and in the pop-up field, enter the time (in milliseconds) during which tKafkaInput waits for a new message. If tKafkaInput receives no new message when this waiting time elapses, it automatically stops running.
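
The three stop conditions above can be pictured as checks around a standard poll loop. The following sketch, continuing the consumer sketch above, illustrates their semantics only; the thresholds are placeholders and this is not the component's actual generated code.

  import java.time.Duration;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;

  long maxTotalMs = 60_000;  // maximum total duration
  long maxMessages = 10_000; // maximum number of messages
  long maxIdleMs = 5_000;    // maximum time waiting between messages

  long start = System.currentTimeMillis();
  long lastMessageAt = start;
  long received = 0;
  while (true) {
      ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
      long now = System.currentTimeMillis();
      if (!records.isEmpty()) {
          received += records.count();
          lastMessageAt = now;
          for (ConsumerRecord<byte[], byte[]> record : records) {
              // hand each message to the rest of the Job here
          }
      }
      if (now - start >= maxTotalMs) break;        // total duration reached
      if (received >= maxMessages) break;          // message count reached
      if (now - lastMessageAt >= maxIdleMs) break; // waited too long for a message
  }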

Use SSL/TLS

Select this check box to enable the SSL or TLS encrypted connection.

This check box is available for Kafka 0.9.0.1 and later.

Set keystore

Select this check box to enable the SSL or TLS encrypted connection via a tSetKeystore component.

Then you need to use the tSetKeystore component in the same Job to specify the encryption information.

This check box is available when you select the Use SSL/TLS check box.

Note: This option is available when you have installed the 8.0.1-R2022-05 Talend Studio Monthly update or a later one delivered by Talend. For more information, check with your administrator.

Use Kerberos authentication

If the Kafka cluster to be used is secured with Kerberos, select this check box to display the related parameters to be defined:

  • JAAS configuration path: enter the path, or browse to the JAAS configuration file to be used by the Job to authenticate as a client to Kafka.

    This JAAS file describes how the clients (in Talend terms, the Kafka-related Jobs) can connect to the Kafka broker nodes, using either the kinit mode or the keytab mode. The JAAS file must be stored in the machine where these Jobs are executed.

    Neither Talend, Kerberos, nor Kafka provides this JAAS file. You need to create it by following the explanation in Configuring Kafka client, depending on the security strategy of your organization; a sample keytab-mode file is shown after this list.

  • Kafka brokers principal name: enter the primary part of the Kerberos principal you defined for the brokers when you were creating the broker cluster. For example, in this principal kafka/kafka1.hostname.com@EXAMPLE.COM, the primary part to be used to fill in this field is kafka.

  • Set kinit command path: Kerberos uses a default path to its kinit executable. If you have changed this path, select this check box and enter the custom access path.

    If you leave this check box clear, the default path is used.

  • Set Kerberos configuration path: Kerberos uses a default path to its configuration file, the krb5.conf file (or krb5.ini in Windows) for Kerberos 5 for example. If you have changed this path, select this check box and enter the custom access path to the Kerberos configuration file.

    If you leave this check box clear, a given strategy is applied by Kerberos to attempt to find the configuration information it requires. For details about this strategy, see the Locating the krb5.conf Configuration File section in Kerberos requirements.
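
For illustration, a JAAS file using the keytab mode typically looks like the following; the keytab path and principal are placeholders, and the kinit mode uses the ticket cache (useTicketCache=true) instead of a keytab:

  KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client@EXAMPLE.COM";
  };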

For further information about how a Kafka cluster is secured with Kerberos, see Authenticating using SASL.

This check box is available for Kafka 0.9.0.1 and later.

Advanced settings

Kafka properties

Add the Kafka consumer properties you need to customize to this table. For example, you can set a specific zookeeper.connection.timeout.ms value to avoid ZkTimeoutException.

You can also set security properties such as SSL encryption with ssl.truststore.location or ssl.keystore.location.

For further information about the consumer properties you can define in this table, see the section describing the consumer configuration in the Kafka documentation at http://kafka.apache.org/documentation.html#consumerconfigs.
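
For example, the table could contain entries such as the following; the values are placeholders:

  Property                            Value
  "zookeeper.connection.timeout.ms"   "12000"
  "ssl.truststore.location"           "/tmp/client.truststore.jks"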

Apply security properties after advanced Kafka properties

Select this check box to give precedence to the security properties set in the Kafka properties table of the Advanced settings view over the security properties set in the tSetKeystore component when the Use SSL/TLS check box is selected in the Basic settings view.

Timeout precision (ms)

Enter the time duration in milliseconds at the end of which you want a timeout exception to be returned if no message is available for consumption.

The value -1 indicates that no timeout is set.

Use schema registry

Select this check box to use Confluent Schema Registry and to display the related parameters to be defined:
  • URL: enter the Schema Registry instance URL.
  • Basic authentication: select this check box and enter your credentials in the Username and Password fields.
  • Set schema registry keystore: select this check box to enable the SSL or TLS encrypted connection. Then you need to use the tSetKeystore component in the same Job to specify the encryption information. This check box is not available when you already set a tSetKeystore in the Basic settings view of the component because Kafka SSL configuration is reused for schema registry.
  • Key deserializer and Value deserializer: select the schema format to use for the key and the value from the drop-down list. The default Custom deserializer is org.apache.kafka.common.serialization.ByteArrayDeserializer.

For more information about Schema Registry, see the Confluent documentation.
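
As an illustration, these options correspond to standard Confluent consumer settings such as the following sketch, assuming Avro deserializers for both the key and the value; the URL and credentials are placeholders.

  import java.util.Properties;
  import io.confluent.kafka.serializers.KafkaAvroDeserializer;

  Properties props = new Properties();
  props.put("schema.registry.url", "https://registry.example.com:8081"); // URL
  props.put("basic.auth.credentials.source", "USER_INFO");               // Basic authentication
  props.put("basic.auth.user.info", "myUser:myPassword");
  props.put("key.deserializer", KafkaAvroDeserializer.class.getName());
  props.put("value.deserializer", KafkaAvroDeserializer.class.getName());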

This option is only available when you select ConsumerRecord from the Output type drop-down list in the Basic settings view.

Note: This option is available when you have installed the 8.0.1-R2022-01 Talend Studio Monthly update or a later one delivered by Talend. For more information, check with your administrator.

Load the offset with the message

Select this check box to output the offsets of the consumed messages to the next component. When you select it, a read-only column named offset is added to the schema.

This property is only available when you select String or byte[] from the Output type drop-down list in the Basic settings view.

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

This property is only available when you select String or byte[] from the Output type drop-down list in the Basic settings view.

tStatCatcher Statistics

Select this check box to gather the processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE

The error message generated by the component when an error occurs. This is an After variable and it returns a string.

Usage

Usage rule

This component is used as a start component and requires an output link. When the Kafka topic to be used does not exist yet, you can use this component along with the tKafkaCreateTopic component to read the topic created by the latter.