tBigtableOutput Standard properties - Cloud - 8.0

These properties are used to configure tBigtableOutput running in the Standard Job framework.

The Standard tBigtableOutput component belongs to the Cloud family.

The component in this framework is available in all Talend products.

Basic settings

Property Type

Select the way the connection details will be set.

  • Built-In: The connection details will be set locally for this component. You need to specify the values for all related connection properties manually.

  • Repository: The connection details stored centrally in Repository > Metadata will be reused by this component.

    Click the [...] button and, in the Repository Content dialog box that opens, select the connection details to be reused; all related connection properties are then filled in automatically.

The Property Type option is not available when the Use an existing connection check box is selected and a connection component is chosen from the Component List drop-down list.

Use an existing connection

Select this check box and in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined.

Project name

Enter the unique identifier of your Google Cloud Platform project. This information is available on the Dashboard page of your Google Cloud console.

Instance ID

Enter the permanent identifier of the Bigtable instance.

Google credentials

Paste the content of the JSON file that contains your service account key.
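For reference, a service account key file downloaded from the Google Cloud console typically has the following shape; all values below are placeholders, and the full file content (including the private key) is what you paste into this field:

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "…",
  "private_key": "-----BEGIN PRIVATE KEY-----\n…\n-----END PRIVATE KEY-----\n",
  "client_email": "my-sa@my-project.iam.gserviceaccount.com",
  "client_id": "…",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```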

Table name

Enter the name of the Bigtable table you need to transfer data to.


Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Edit schema

Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.

Action on data

Select the action to be performed from the drop-down list when transferring data to the target table:

  • Insert: inserts new items from the input flow.

  • Delete: removes existing items according to the input flow.
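In effect, the two actions behave like an upsert and a delete keyed on the row key. The following sketch illustrates these semantics only; it models the table as a plain map rather than using the Google Bigtable client API, and deliberately omits column families and cells:

```java
import java.util.HashMap;
import java.util.Map;

public class ActionOnDataSketch {
    // The table is modelled as rowKey -> value; real Bigtable rows hold
    // column families and timestamped cells, which this sketch omits.
    static final Map<String, String> table = new HashMap<>();

    // Insert: writes the incoming row under its row key.
    static void insert(String rowKey, String value) {
        table.put(rowKey, value);
    }

    // Delete: removes the row identified by the incoming row key.
    static void delete(String rowKey) {
        table.remove(rowKey);
    }

    public static void main(String[] args) {
        insert("row-1", "alice");
        insert("row-2", "bob");
        delete("row-1");
        System.out.println(table.keySet()); // [row-2]
    }
}
```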

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Max batch size

Set the maximum number of lines allowed in each batch.
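Conceptually, the input flow is split into groups of at most this many rows before each group is sent to Bigtable. The sketch below shows that grouping logic under this assumption; it is an illustration, not Talend's internal implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split rows into consecutive batches of at most maxBatchSize elements.
    static <T> List<List<T>> partition(List<T> rows, int maxBatchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += maxBatchSize) {
            int end = Math.min(i + maxBatchSize, rows.size());
            batches.add(new ArrayList<>(rows.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("r1", "r2", "r3", "r4", "r5");
        // With a max batch size of 2, five rows yield batches of 2, 2, and 1.
        System.out.println(partition(rows, 2)); // [[r1, r2], [r3, r4], [r5]]
    }
}
```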

Global Variables


ERROR_MESSAGE

The error message generated by the component when an error occurs. This is an After variable and it returns a string.


NB_LINE

The number of rows processed. This is an After variable and it returns an integer.
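In a generated Talend Job, these variables are read from the globalMap under the key "&lt;component name&gt;_&lt;variable&gt;", typically in a tJava component placed after tBigtableOutput. The sketch below stubs globalMap so it is self-contained; the component name tBigtableOutput_1 is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarsSketch {
    // In a real Job, globalMap is provided by the Talend framework and is
    // populated after each component runs; it is stubbed here.
    static final Map<String, Object> globalMap = new HashMap<>();

    // Typical retrieval pattern used inside a tJava component.
    static Integer rowsProcessed(String componentName) {
        return (Integer) globalMap.get(componentName + "_NB_LINE");
    }

    public static void main(String[] args) {
        // Simulate the After variables set by the component.
        globalMap.put("tBigtableOutput_1_NB_LINE", 42);

        System.out.println("Rows processed: "
                + rowsProcessed("tBigtableOutput_1")); // Rows processed: 42
    }
}
```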


Usage rule

tBigtableOutput is used as an end component and requires an input link. It can reuse the connection opened by a tBigtableConnection component.