tSocketTextStreamInput properties for Apache Spark Streaming - 7.0

tSocketTextStreamInput

author
Talend Documentation Team
EnrichVersion
7.0
EnrichProdName
Talend Data Fabric
Talend Real-Time Big Data Platform
task
Data Governance > Third-party systems > Misc components > tSocketTextStreamInput
Data Quality and Preparation > Third-party systems > Misc components > tSocketTextStreamInput
Design and Development > Third-party systems > Misc components > tSocketTextStreamInput
EnrichPlatform
Talend Studio

These properties are used to configure tSocketTextStreamInput running in the Spark Streaming Job framework.

The Spark Streaming tSocketTextStreamInput component belongs to the Internet family.

The streaming version of this component is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.

Basic settings

Host name

Enter the name or the IP address of the server to be connected to.

Port

Enter the number of the listening port of the server to be connected to.
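For reference, the sketch below shows the underlying Spark Streaming call that this component relies on; it is not the code that Talend Studio generates. The host "localhost", the port 9999, and the 5-second batch interval are placeholder values standing in for the Host name and Port settings described above.

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class SocketTextStreamSketch {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("SocketTextStreamSketch");
            // The batch interval is a placeholder; in a Job it comes from the Spark configuration.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

            // Connect to the listening socket server; each line received becomes one String record.
            JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);
            lines.print();

            jssc.start();
            jssc.awaitTermination();
        }
    }

For a quick local test, any process listening on the configured port (for example, nc -lk 9999) can serve as the socket server to send lines to.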

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

The schema of this component is read-only. You can click Edit schema to view the schema.

This read-only line column is used to carry the strings to be passed to the next component in the Job. Depending on the format of the strings, select the corresponding component to process them, for example, tExtractJSONFields for JSON strings.
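As an illustration only (not Talend-generated code), the sketch below shows roughly how the strings carried by the line column could be parsed as JSON in plain Spark Streaming Java code; within a Job you would normally place tExtractJSONFields after this component instead. The name field is a hypothetical example key.

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.spark.streaming.api.java.JavaDStream;

    public final class LineJsonSketch {
        // Map each incoming line (assumed to be a JSON string) to the value of one field.
        public static JavaDStream<String> extractName(JavaDStream<String> lines) {
            return lines.map(line -> {
                ObjectMapper mapper = new ObjectMapper(); // created per record for simplicity only
                JsonNode node = mapper.readTree(line);
                return node.path("name").asText();
            });
        }
    }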

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job needs its dependent jar files at execution time, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.
    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.
    • When using Altus, specify the S3 bucket or the Azure Data Lake store (technical preview) for Job deployment in the Spark configuration tab.
    • When using other distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.
  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.
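Outside of Talend Studio, a comparable plain-Spark setting on YARN is the spark.yarn.stagingDir property, which controls where files submitted with an application, including jar dependencies, are staged. The sketch below only illustrates that Spark property, not what the Studio generates; the HDFS path is a hypothetical example.

    import org.apache.spark.SparkConf;

    public class StagingDirSketch {
        public static void main(String[] args) {
            // Hypothetical staging directory; in a Talend Job this location is set
            // through the Spark configuration tab or a t*Configuration component.
            SparkConf conf = new SparkConf()
                    .setAppName("StagingDirSketch")
                    .set("spark.yarn.stagingDir", "hdfs:///user/talend/staging");
            System.out.println(conf.get("spark.yarn.stagingDir"));
        }
    }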
