Defining Cloudera Data Engineering connection parameters with Spark Universal - Cloud - 8.0

Talend Studio User Guide

Version: Cloud 8.0
Language: English
Product: Talend Big Data, Talend Big Data Platform, Talend Cloud, Talend Data Fabric, Talend Data Integration, Talend Data Management Platform, Talend Data Services Platform, Talend ESB, Talend MDM Platform, Talend Real-Time Big Data Platform
Module: Talend Studio
Content: Design and Development
Last publication date: 2024-02-29

Available in: Big Data, Big Data Platform, Cloud Big Data, Cloud Big Data Platform, Cloud Data Fabric, Data Fabric, Real-Time Big Data Platform

About this task

Talend Studio connects to the Cloudera Data Engineering (CDE) service to run your Spark Jobs on the cluster.

Procedure

  1. Click the Run view beneath the design workspace, then click the Spark configuration view.
  2. Select Built-in from the Property type drop-down list.
    If you have already set up the connection parameters in the Repository as explained in Centralizing a Hadoop connection, you can easily reuse them. To do this, select Repository from the Property type drop-down list, then click the […] button to open the Repository Content dialog box and select the Hadoop connection to be used.
    Tip: Setting up the connection in the Repository allows you to avoid configuring that connection each time you need it in the Spark configuration view of your Jobs. The fields are automatically filled.
  3. Select Universal from the Distribution drop-down list, the Spark version from the Version drop-down list, and Cloudera Data Engineering from the Runtime mode/environment drop-down list.
  4. If you need to launch your Spark Job from Windows, specify where the winutils.exe program to be used is stored:
    • If you know where to find your winutils.exe file and you want to use it, select the Define the Hadoop home directory check box and enter the directory where your winutils.exe is stored.

    • Otherwise, leave the Define the Hadoop home directory check box cleared; Talend Studio generates a winutils.exe file by itself and automatically uses it for this Job.
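    The Hadoop home directory only matters when you launch the Job locally from Windows. As a quick sanity check, here is a minimal Python sketch, with a hypothetical path, that verifies the directory you plan to enter actually contains winutils.exe:

      import os

      # Hypothetical directory; replace it with the path you enter in the
      # "Define the Hadoop home directory" field.
      hadoop_home = r"C:\hadoop"

      # The directory you enter is expected to contain winutils.exe.
      print("winutils.exe present:",
            os.path.isfile(os.path.join(hadoop_home, "winutils.exe")))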

  5. Enter the basic Configuration information:
    • Use local timezone: select this check box to let Spark use the local time zone provided by the system.
      Note:
      • If you clear this check box, Spark uses the UTC time zone (see the PySpark sketch after this list).
      • Some components also have a Use local timezone for date check box. If you clear that check box in the component, it inherits the time zone from the Spark configuration.
    • Use dataset API in migrated components: select this check box to let the components use the Dataset (DS) API instead of the Resilient Distributed Dataset (RDD) API:
      • If you select the check box, the components inside the Job run with DS, which improves performance.
      • If you clear the check box, the components inside the Job run with RDD, which means the Job remains unchanged. This ensures backward compatibility.
      This check box is selected by default, but if you import a Job created in version 7.3 or earlier, the check box is cleared, as those Jobs run with RDD.
      Important: If your Job contains tDeltaLakeInput and tDeltaLakeOutput components, you must select this check box.
    • Use timestamp for dataset components: select this check box to use java.sql.Timestamp for dates.
      Note: If you leave this check box cleared, java.sql.Timestamp or java.sql.Date can be used depending on the pattern.
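    For reference, the Use local timezone option corresponds to Spark's standard session time zone setting. A minimal PySpark sketch that only illustrates the equivalent Spark behavior, not how Talend Studio applies the option internally:

      from pyspark.sql import SparkSession

      # Forcing UTC reproduces the behavior of a cleared "Use local timezone"
      # check box; omit the config line to fall back to the system time zone.
      spark = (SparkSession.builder
               .appName("timezone-demo")
               .config("spark.sql.session.timeZone", "UTC")
               .getOrCreate())

      spark.sql("SELECT current_timestamp() AS now").show(truncate=False)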
  6. Complete the CDE configuration parameters:
    • CDE API endpoint: enter the CDE API endpoint. You can find this URL from the JOBS API URL link.
    • CDE API token: enter the CDE token you use for API authentication. For more information, see CDE API access token in the Cloudera documentation.
      This property is available only when the Auto generate token check box is cleared.
    • Auto generate token: select this check box to create a new token before a Job is submitted, then complete the following parameters (see the sketch after this list):
      • CDE token endpoint: enter the CDE token endpoint you want to use. The URL must respect the following format: [BASE_URL]/gateway/authtkn.
      • Workload user: enter the CDP workload user you want to use to generate a new token. For more information, see CDP workload user in the Cloudera documentation.
      • Workload password: enter the password associated with the workload user.
    • Enable client debugging: select this check box to enable debug logging for the CDE API client.
    • Override dependencies: select this check box to override the dependencies needed for Spark.
    • Job status/logs polling interval (in ms): enter the time interval, in milliseconds, at which Talend Studio asks Spark for the status of your Job.
    • Fetch driver logs: select this check box to fetch the driver logs at runtime. You can choose which information to fetch:
      • Standard output
      • Error output
    • Advanced parameters: select this check box to enter the following CDE API advanced parameters:
      • Number of executors: enter the number of executors.
      • Driver cores: enter the number of driver cores.
      • Driver memory: enter the amount of memory to allocate to the driver.
      • Executor cores: enter the number of executor cores.
      • Executor memory: enter the amount of memory to allocate to each executor.
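    To make these parameters concrete, here is a hypothetical Python sketch of the flow that Auto generate token automates: exchange the workload credentials for a short-lived token at the token endpoint, submit a job run through the CDE API, then poll the run status at the configured interval. The endpoint paths, JSON field names, job name, and credentials are assumptions based on Cloudera's public CDE API, not Talend Studio internals:

      import time
      import requests

      # Assumed values for illustration only.
      token_endpoint = "https://<BASE_URL>/gateway/authtkn/knoxtoken/api/v1/token"
      api_endpoint = "https://<JOBS_API_URL>"            # the CDE API endpoint field
      user, password = "my-workload-user", "my-secret"   # CDP workload credentials
      poll_interval_ms = 3000                            # the polling interval field

      # 1. Generate a token from the workload credentials, as Auto generate
      #    token does before each submission.
      resp = requests.get(token_endpoint, auth=(user, password))
      resp.raise_for_status()
      headers = {"Authorization": f"Bearer {resp.json()['access_token']}"}

      # 2. Submit a run of an existing CDE job (the job name is hypothetical).
      run = requests.post(f"{api_endpoint}/jobs/my-spark-job/run",
                          headers=headers, json={})
      run.raise_for_status()
      run_id = run.json()["id"]

      # 3. Poll the run status until it reaches a terminal state.
      while True:
          status = requests.get(f"{api_endpoint}/job-runs/{run_id}",
                                headers=headers).json()["status"]
          print("run status:", status)
          if status in ("succeeded", "failed", "killed"):
              break
          time.sleep(poll_interval_ms / 1000)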
  7. In the Spark "scratch" directory field, enter the directory in which Talend Studio stores, in the local system, temporary files such as the JAR files to be transferred. If you launch the Job on Windows, the default disk is C:, so leaving /tmp in this field makes the directory C:/tmp.
  8. If you need the Job to be resilient to failure, select the Activate checkpointing check box to enable the Spark checkpointing operation, then enter the directory in which Spark stores the context data of the computations, such as the metadata and the generated RDDs of this computation.
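    A minimal PySpark sketch of the checkpointing operation this option enables; the directory is a placeholder and must be reachable by Spark on a real cluster:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("checkpoint-demo").getOrCreate()

      # Directory where Spark persists checkpoint data (placeholder path).
      spark.sparkContext.setCheckpointDir("/tmp/checkpoints")

      df = spark.range(1000)
      df = df.checkpoint()  # truncates the lineage and saves the data to the directory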
  9. In the Advanced properties table, add any Spark properties you need to override their default counterparts used by Talend Studio (see the sketch below).
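    Each row of this table is a standard Spark property/value pair applied on top of Talend Studio's defaults. A minimal PySpark sketch of the same kind of override; the property and value are only examples:

      from pyspark.sql import SparkSession

      # Equivalent of an Advanced properties row with property
      # "spark.sql.shuffle.partitions" and value "200".
      spark = (SparkSession.builder
               .config("spark.sql.shuffle.partitions", "200")
               .getOrCreate())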

Results

The connection details are complete; you are ready to schedule executions of your Job or to run it immediately.