tBigQueryOutputBulk Standard properties - Cloud - 8.0

Google BigQuery

Version
Cloud
8.0
Language
English
Product
Talend Big Data
Talend Big Data Platform
Talend Data Fabric
Talend Data Integration
Talend Data Management Platform
Talend Data Services Platform
Talend ESB
Talend MDM Platform
Talend Real-Time Big Data Platform
Module
Talend Studio
Last publication date
2024-02-20

These properties are used to configure tBigQueryOutputBulk running in the Standard Job framework.

The Standard tBigQueryOutputBulk component belongs to the Big Data family.

The component in this framework is available in all Talend products.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word "line" when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 
Note the following limitations:

  • The Record type of BigQuery is not supported.
  • The columns for table metadata, such as the Description column or the Mode column, cannot be retrieved.
  • Timestamp data from your BigQuery system is formatted as String data (see the sketch after this list).
  • Numeric data from BigQuery is converted to BigDecimal.
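
Because of the last two points, downstream Java code receives BigQuery timestamps as plain String values and BigQuery numeric data as BigDecimal objects. A minimal standalone sketch of handling both (the sample values and the date format are assumptions for illustration, not values the component guarantees):

    import java.math.BigDecimal;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class BigQueryTypeMapping {
        public static void main(String[] args) throws Exception {
            // TIMESTAMP values arrive as String data; parse them when a Date is needed.
            String ts = "2024-02-20 12:34:56";
            Date parsed = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(ts);

            // NUMERIC values arrive as BigDecimal, preserving exact precision.
            BigDecimal amount = new BigDecimal("12345.6789");

            System.out.println(parsed + " / scale=" + amount.scale()); // scale=4
        }
    }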

File name

Browse to, or enter the path to, the .txt or .csv file to be generated.

Append

Select the check box to write new data at the end of the existing data. Otherwise, the existing data will be overwritten.

Advanced settings

Use a custom endpoint

Select this check box to use a private endpoint rather than the default one.

When selected, enter the URLs in the following properties:
  • Google Storage Private API URL, respecting the "https://storage.googleapis.com" format.
  • Google BigQuery Private API URL, respecting the "https://bigquery.googleapis.com" format.

For more information, see Access Google APIs through endpoints in the Google Cloud documentation.

This property is only available when you authenticate using Service account.

Field Separator

Enter a character, a string, or a regular expression to separate fields for the transferred data.

Use custom null marker

Select this option to use a specific character as the null marker. You can specify the null marker, in double quotation marks, in the field to the right.

This option prevents errors caused by fields with null values.
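
For example, with the field separator ";" and the null marker "\N" (both values are illustrative assumptions), a row whose second field is null would be written to the generated file as:

    John;\N;42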

Create directory if not exists

Select this check box to create the directory you defined in the File field for Google Cloud Storage, if it does not exist.

Custom the flush buffer size

Enter the number of rows to be processed before the memory is freed.
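
For example, with a value of 10000 (an illustrative value), the buffer is written out and the memory freed every 10,000 rows, which can help avoid memory issues when generating very large files.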

Check disk space

Select this check box to throw an exception during execution if the disk is full.

Encoding

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com.

tStatCatcher Statistics

Select this check box to collect the log data at the component level.

Global Variables

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, on components that have this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.
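
In the generated Java code, these After variables are stored in the globalMap and can be read once the component has finished. A minimal sketch for a tJava component triggered after the subJob (for example with OnSubjobOk), assuming the component instance is named tBigQueryOutputBulk_1 (use your instance's actual name):

    // Read the After variables once tBigQueryOutputBulk_1 has finished.
    Integer nbLine = (Integer) globalMap.get("tBigQueryOutputBulk_1_NB_LINE");
    String errorMessage = (String) globalMap.get("tBigQueryOutputBulk_1_ERROR_MESSAGE");
    System.out.println("Rows written: " + nbLine);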

For more information about variables, see Using contexts and variables.

Usage

Usage rule

This is an output component which needs the data provided by its preceding component.

This component automatically detects and supports both multi-regional locations and regional locations. When using regional locations, the buckets and the datasets to be used must be in the same location.
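
In a typical design, tBigQueryOutputBulk generates the bulk file and a tBigQueryBulkExec component then loads that file into Google BigQuery, which together provide the function of tBigQueryOutput while letting you transform the data before the load. A sketch of such a flow (the component instances are illustrative):

    tRowGenerator --Main--> tBigQueryOutputBulk    (generates the .csv file)
           |
      OnSubjobOk
           v
    tBigQueryBulkExec                              (loads the file into BigQuery)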