tFlumeInput properties for Apache Spark Streaming - 6.5


Talend Documentation Team
Talend Data Fabric
Talend Real-Time Big Data Platform
Talend Studio

These properties are used to configure tFlumeInput running in the Spark Streaming Job framework.

The Spark Streaming tFlumeInput component belongs to the Messaging family.

The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.

Basic settings

Host and Port

Enter the hostname and the port of the machine used as the sink (the data output point bound to the channel of a Flume agent) to receive data from Flume.

  • If you select As Receiver from the Type drop-down list, this machine must be one of the machines on which a Spark worker runs and the hostname must be the same as the one used by the resource manager of the Spark cluster to be used.

  • If you select As Sink from the Type drop-down list, this machine must be a sink in a Flume agent and be accessible to the Spark cluster.


Type

Select the approach to read data from Flume.

  • As Receiver: this is the Push-based approach typically employed by Flume. In this approach, a machine from the Spark cluster is set up as an agent to receive data pushed by Flume and the Spark Streaming Job you are designing reads data from this agent.

  • As Sink: this is the Pull-based approach. In this approach, a machine is set up as a sink to buffer data pushed by Flume and the Spark Streaming Job you are designing pulls data from this sink.

For further information about these two approaches, see the Flume integration guide in the Apache Spark Streaming documentation; a minimal code sketch is also shown below.
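The two modes correspond to the push-based and pull-based receivers of Spark's Flume integration. As a rough illustration only (this is not the code the Studio generates), the following Java sketch shows how each mode could be expressed with the Spark Streaming Flume API; the application name, host names, ports and batch interval are placeholder values, and a real Job uses only one of the two modes.

    // Hypothetical sketch of the Spark Streaming Flume API behind the two Type modes.
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.flume.FlumeUtils;
    import org.apache.spark.streaming.flume.SparkFlumeEvent;

    public class FlumeInputModes {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("FlumeInputModes");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

            // As Receiver (push-based): a Spark worker host runs a receiver and the
            // Flume agent pushes events to it through an Avro sink.
            JavaReceiverInputDStream<SparkFlumeEvent> pushed =
                    FlumeUtils.createStream(jssc, "spark-worker-1", 41414);

            // As Sink (pull-based): the Flume agent buffers events in a Spark sink and
            // the streaming Job pulls them from that host and port.
            JavaReceiverInputDStream<SparkFlumeEvent> pulled =
                    FlumeUtils.createPollingStream(jssc, "flume-sink-host", 41415);

            pushed.print();
            pulled.print();

            jssc.start();
            jssc.awaitTermination();
        }
    }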

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. When you create a Spark Job, avoid the reserved word line when naming the fields.

Built-In: You create and store the schema locally for this component only.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

The read-only line column is used by tFlumeInput to automatically extract the body of an input Flume event and construct an RDD along with the other columns used to store the header of the same event.
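As an illustration of this mapping (again, not the code the Studio generates), the following sketch shows how the body of a Flume event could populate the line column while one of its headers populates another schema column; the class, method and header names are hypothetical.

    // Minimal sketch of splitting a Flume event into the line column and a header column.
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import org.apache.spark.streaming.flume.SparkFlumeEvent;

    public class FlumeEventToRow {
        // Value of the read-only line column: the decoded event body.
        static String lineColumn(SparkFlumeEvent sparkEvent) {
            ByteBuffer bodyBuffer = sparkEvent.event().getBody();
            byte[] body = new byte[bodyBuffer.remaining()];
            bodyBuffer.get(body);
            return new String(body, StandardCharsets.UTF_8);
        }

        // Value of an extra schema column that stores one event header, e.g. "host".
        static String headerColumn(SparkFlumeEvent sparkEvent, String headerName) {
            for (Map.Entry<CharSequence, CharSequence> entry
                    : sparkEvent.event().getHeaders().entrySet()) {
                if (entry.getKey().toString().equals(headerName)) {
                    return entry.getValue().toString();
                }
            }
            return null;
        }
    }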

Advanced settings


Encoding

Select the encoding from the list or select Custom and define it manually.

This encoding is used by tFlumeInput to decode the input event arrays.
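As a minimal illustration of this decoding step (not Studio-generated code), the sketch below applies a selected or Custom charset name to the body bytes of an event; the class and method names and the example charset are hypothetical.

    // Minimal sketch of what the Encoding setting amounts to.
    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;

    public class DecodeEventBody {
        static String decode(ByteBuffer body, String encodingName) {
            // encodingName may be a list value such as "UTF-8" or a Custom value,
            // for example "ISO-8859-15".
            return Charset.forName(encodingName).decode(body.duplicate()).toString();
        }
    }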


Usage rule

This component is used as a start component and requires an output link.

At runtime, the tFlumeInput component keeps listening to the sink and reads new events once they are buffered in this sink.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:
  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.


Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find and add all missing JARs easily on the Modules tab in the Integration perspective of your studio. For details, see Installing external modules.