Reading and writing Avro data from Kafka using Standard Jobs - Cloud - 8.0

Kafka

Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Open Studio for Big Data
Talend Real-Time Big Data Platform
Module
Talend Studio
Content
Data Governance > Third-party systems > Messaging components (Integration) > Kafka components
Data Quality and Preparation > Third-party systems > Messaging components (Integration) > Kafka components
Design and Development > Third-party systems > Messaging components (Integration) > Kafka components

This scenario explains how to use a schema registry with a deserializer, and how to handle Avro data through ConsumerRecord and ProducerRecord with the Kafka components in your Standard Jobs.

For more technologies supported by Talend, see Talend components.

In this scenario, you first create a Standard Job that reads Avro data using ConsumerRecord; this Job is referred to as the reading Job. You then create a second Standard Job that writes Avro data using ProducerRecord; this Job is referred to as the writing Job.
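To illustrate the schema-registry side of this setup, a Kafka consumer that deserializes Avro records through a schema registry is typically configured with properties along the following lines. This is a minimal sketch assuming Confluent's KafkaAvroDeserializer and a registry running at a hypothetical localhost endpoint; adjust the broker address, registry URL, and group id to your environment.

```properties
# Kafka broker and consumer group (hypothetical addresses)
bootstrap.servers=localhost:9092
group.id=avro-reading-job

# Deserialize keys and values as Avro, resolving schemas via the registry
key.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer

# Location of the schema registry (hypothetical endpoint)
schema.registry.url=http://localhost:8081
```

With this configuration, each ConsumerRecord's value arrives already deserialized from Avro, using the schema that the registry associates with the topic.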

This scenario applies only to Talend products with Big Data and to Talend Data Fabric.