tSplunkEventCollector Standard properties - 7.3

These properties are used to configure tSplunkEventCollector running in the Standard Job framework.

The Standard tSplunkEventCollector component belongs to the Business Intelligence family.

The component in this framework is available in all Talend products.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Talend Studio User Guide.

This dynamic schema feature is designed to retrieve the unknown columns of a table and is recommended for that purpose only; it is not recommended for creating tables.

Note that the schema of this component is predefined with the following fields. You can click the [...] button next to Edit schema to view and modify it.
  • time: the event time. The input data is expected in Java Date format and is converted to the epoch time format required by Splunk before it is sent to the Splunk HTTP Event Collector (see the sketch after this list).

  • source: the source value of the event data. It is usually the file or directory path, network port, or script from which the event originated.

  • sourcetype: the source type of the event data. It indicates the kind of data the event contains.

  • host: the host of the event data. It is usually the host name, IP address, or fully qualified domain name of the network machine from which the event originated.

  • index: the name of the index by which the event data is to be indexed. It must be one of the allowed indexes if the indexes parameter is set for the token.

For more information about the format of the event data sent to Splunk HTTP Event Collector, see About the JSON event protocol in HTTP Event Collector.
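As an illustration only, the following minimal sketch (not the component's generated code) shows how a row using the default schema could map to the JSON event format that the HTTP Event Collector expects, including the conversion of a Java Date to epoch time. The sample values and the "event" wrapper for the remaining data are assumptions made for this example.

    // A minimal sketch, not the component's generated code: mapping a row that
    // uses the default schema to the JSON event format expected by the Splunk
    // HTTP Event Collector. The sample values and the "event" wrapper are
    // illustrative assumptions.
    import java.util.Date;

    public class HecPayloadSketch {
        public static void main(String[] args) {
            Date time = new Date();                     // schema column: time (Java Date)
            long epochSeconds = time.getTime() / 1000L; // HEC expects epoch time

            String payload = "{"
                    + "\"time\":" + epochSeconds + ","
                    + "\"source\":\"datagen\","
                    + "\"sourcetype\":\"_json\","
                    + "\"host\":\"localhost\","
                    + "\"index\":\"main\","
                    + "\"event\":{\"message\":\"sample event\"}"
                    + "}";
            System.out.println(payload);
        }
    }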

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

Splunk Server URL

Enter the URL used to access the Splunk Web Server.

Token

Specify the Event Collector token used to authenticate the event data. For more information, see HTTP Event Collector token management.
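To illustrate how the server URL and token work together, the sketch below posts a single event over HTTP, assuming Splunk's documented HEC defaults: the /services/collector endpoint on port 8088 and an "Authorization: Splunk <token>" header. The URL and token values are placeholders, and the component's own HTTP logic may differ.

    // A minimal sketch, assuming Splunk's documented HEC defaults; the URL and
    // token below are placeholders, not values from this page.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class HecPostSketch {
        public static void main(String[] args) throws Exception {
            String serverUrl = "http://localhost:8088/services/collector";
            String token = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";

            HttpURLConnection conn = (HttpURLConnection) new URL(serverUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Splunk " + token);
            conn.setDoOutput(true);

            String body = "{\"event\":{\"message\":\"sample event\"}}";
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Response code: " + conn.getResponseCode());
        }
    }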

Advanced settings

Extended output

Select this check box to send the event data to Splunk in batch mode. In the field displayed, enter the number of events to be processed in each batch.

By default, this check box is selected and the number of events to be processed in each batch is 100.
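The sketch below illustrates one way events can be grouped into batches, assuming the HEC convention of concatenating several JSON event objects into a single request body. Whether the component builds its batches exactly this way is not documented here; the batch size of 100 mirrors the default above.

    // A minimal batching sketch under the assumptions stated above.
    import java.util.ArrayList;
    import java.util.List;

    public class HecBatchSketch {
        public static void main(String[] args) {
            int batchSize = 100; // default "Extended output" batch size
            List<String> events = new ArrayList<>();
            for (int i = 0; i < 250; i++) {
                events.add("{\"event\":{\"message\":\"event " + i + "\"}}");
            }

            StringBuilder body = new StringBuilder();
            int inBatch = 0;
            for (String event : events) {
                body.append(event); // events are concatenated, not wrapped in an array
                if (++inBatch == batchSize) {
                    System.out.println("POST batch of " + inBatch + " events (" + body.length() + " bytes)");
                    body.setLength(0);
                    inBatch = 0;
                }
            }
            if (inBatch > 0) {
                System.out.println("POST final batch of " + inBatch + " events");
            }
        }
    }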

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After variable and it returns an integer.

RESPONSE_CODE: the response code from Splunk. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To enter a variable in a field or expression, press Ctrl + Space to access the variable list and choose the variable to use from it (see the sketch at the end of this section).

For further information about variables, see Talend Studio User Guide.
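For example, the After variables can be read from a downstream tJava component through the globalMap lookup that Talend-generated code uses. The component instance name tSplunkEventCollector_1 below is an assumption; use the name shown in your own Job.

    // Placed in a tJava component connected after tSplunkEventCollector_1 with an
    // OnSubjobOk trigger; globalMap is provided by the generated Job code.
    Integer nbLine = (Integer) globalMap.get("tSplunkEventCollector_1_NB_LINE");
    Integer responseCode = (Integer) globalMap.get("tSplunkEventCollector_1_RESPONSE_CODE");
    String errorMessage = (String) globalMap.get("tSplunkEventCollector_1_ERROR_MESSAGE");

    System.out.println("Rows sent: " + nbLine + ", response code: " + responseCode);
    if (errorMessage != null) {
        System.out.println("Error: " + errorMessage);
    }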

Usage

Usage rule

This component is usually used as an end component of a Job or subJob and it always needs an input link.