Setting up the Hadoop connection - 8.0

Setting up context-smart Hadoop connections

Version: 8.0
Language: English
Product: Talend Big Data, Talend Data Fabric, Talend Open Studio for Big Data, Talend Real-Time Big Data Platform
Module: Talend Studio
Content: Design and Development > Designing Jobs > Hadoop distributions
Last publication date: 2024-02-06

You first need to set up the connection to a given Hadoop environment.

In this article, a Cloudera distribution is used for demonstration purposes.

Before you begin

  • Ensure that the client machine on which Talend Studio is installed can resolve the host names of the nodes of the Hadoop cluster to be used. For this purpose, add the IP address/hostname mapping entries for the services of that Hadoop cluster to the hosts file of the client machine.

    For example, if the host name of the Hadoop NameNode server is talend-cdh550.weave.local and its IP address is 192.168.x.x, the mapping entry reads 192.168.x.x talend-cdh550.weave.local. A quick way to verify the resolution is sketched after this list.

  • The Hadoop cluster to be used has been properly configured and is running.

  • The Integration perspective is active.

  • Cloudera is the example distribution of this article. If you are using a different distribution, bear in mind the following distribution-specific prerequisites:
    • If you need to connect to MapR from Talend Studio, ensure that you have installed the MapR client on the machine where Talend Studio is installed, and added the MapR client library to the PATH variable of that machine. According to the MapR documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL/hadoop/hadoop-VERSION/lib/native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    • If you need to connect to a Google Dataproc cluster, set an environment variable on your local machine (typically GOOGLE_APPLICATION_CREDENTIALS) that points to the Google credentials file associated with the service account to be used, so that the Check service feature of the metadata wizard can properly verify your configuration.

      For further information about setting this environment variable, see Getting Started with Authentication in the Google documentation.
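
The hosts-file mapping mentioned in the first prerequisite can be checked from the client machine before you open the wizard. The following is a minimal sketch using the example host name of this article; it is an illustration, not something Talend Studio runs:

    // HostCheck.java: verify that the client machine resolves the host
    // name of a Hadoop node (example name taken from this article).
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class HostCheck {
        public static void main(String[] args) {
            String host = args.length > 0 ? args[0] : "talend-cdh550.weave.local";
            try {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " resolves to " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.err.println(host + " cannot be resolved; check the hosts file.");
            }
        }
    }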

Procedure

  1. In the Repository tree view of Talend Studio, expand Metadata and then right-click Hadoop cluster.
  2. Select Create Hadoop cluster from the contextual menu to open the Hadoop cluster connection wizard.
  3. Fill in generic information about this connection, such as Name and Description, and click Next to open the [Hadoop Configuration Import Wizard] window, in which you select the distribution to be used and whether to configure the connection automatically or manually.
    • Retrieve configuration from Ambari or Cloudera: if you are using a Hortonworks Data Platform or a Cloudera CDH cluster that is managed by its dedicated platform (Hortonworks Ambari for Hortonworks Data Platform, Cloudera Manager for Cloudera CDH), select this option to import the configuration directly.

    • Import configuration from local files: when you have obtained, or can obtain, the configuration files (mainly the *-site.xml files), for example from the administrator of the Hadoop cluster or by downloading them from the web-based cluster management service, use this option to import the properties directly from those files, as sketched after these options.

    • Enter manually Hadoop services: click Finish and enter the connection parameters manually.
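
    For reference, the properties that the import options read live in the *-site.xml files. The following sketch loads them directly with Hadoop's Configuration API; the file paths are illustrative placeholders:

        // Sketch: read the properties the import options rely on from the
        // *-site.xml files. The paths below are illustrative placeholders.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;

        public class ImportSketch {
            public static void main(String[] args) {
                Configuration conf = new Configuration(false); // skip built-in defaults
                for (String f : new String[] {
                        "core-site.xml", "hdfs-site.xml", "yarn-site.xml", "mapred-site.xml"}) {
                    conf.addResource(new Path("/path/to/conf/" + f));
                }
                // The wizard derives its fields from properties such as this one.
                System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
            }
        }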

    Whether you choose an automatic approach or the manual one, the parameters you need to define are the following; a sketch of how they map to standard Hadoop properties follows this list:
    • Namenode URI: enter the URI of the NameNode machine of the cluster to be used.

    • Resource Manager and Resource Manager scheduler: enter the URI pointing to the machine used by the Resource Manager service of your cluster and the address of its scheduler, respectively.

    • Job history: enter the location of the JobHistory server of your cluster. This allows the metrics information of the current Job to be stored in that JobHistory server.

    • Staging directory: enter the directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory is defined by the yarn.app.mapreduce.am.staging-dir property in configuration files such as yarn-site.xml or mapred-site.xml of your distribution.

    • Use datanode hostname: select this check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true.

    • The User name field is available when you are not using Kerberos to authenticate. In this field, enter the login user name for your distribution. If you leave it empty, the user name of the machine hosting Talend Studio is used.
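
    As an illustration only, these wizard fields correspond to standard Hadoop properties. The following sketch shows the mapping; the host names, ports and paths are placeholder values, not a configuration that Talend Studio generates:

        // Illustrative mapping of the wizard fields to standard Hadoop
        // properties. Host names, ports and paths are placeholders.
        import org.apache.hadoop.conf.Configuration;

        public class ConnectionSketch {
            public static Configuration build() {
                Configuration conf = new Configuration();
                // Namenode URI
                conf.set("fs.defaultFS", "hdfs://talend-cdh550.weave.local:8020");
                // Resource Manager and Resource Manager scheduler
                conf.set("yarn.resourcemanager.address", "talend-cdh550.weave.local:8032");
                conf.set("yarn.resourcemanager.scheduler.address", "talend-cdh550.weave.local:8030");
                // Job history
                conf.set("mapreduce.jobhistory.address", "talend-cdh550.weave.local:10020");
                // Staging directory
                conf.set("yarn.app.mapreduce.am.staging-dir", "/user");
                // Use datanode hostname
                conf.setBoolean("dfs.client.use.datanode.hostname", true);
                return conf;
            }
        }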

  4. Verify whether your cluster is security-enabled and bear in mind that the security configuration cannot be contextualized.

    If you are accessing a Hadoop cluster running with Kerberos security, select this check box, then enter the Kerberos principal names for the ResourceManager service and the JobHistory service in the fields that are displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos. These principal names can be found in the configuration files of your distribution, for example in yarn-site.xml and mapred-site.xml.

    If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. Enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine on which your Job actually runs, for example, on a Talend JobServer.

    Note that the user who executes a keytab-enabled Job is not necessarily the one the principal designates, but must have read permission on the keytab file being used. For example, if the user name you are using to execute a Job is user1 and the principal to be used is guest, ensure that user1 has the right to read the keytab file to be used. A login sketch follows this step.
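
    For reference, this kind of keytab login can be reproduced with Hadoop's UserGroupInformation API. A minimal sketch, with a placeholder realm and keytab path:

        // Minimal keytab login sketch using Hadoop's UserGroupInformation
        // API. The realm and the keytab path are placeholders.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.security.UserGroupInformation;

        public class KeytabLoginSketch {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("hadoop.security.authentication", "kerberos");
                UserGroupInformation.setConfiguration(conf);
                // The OS user running this code (user1 in the example above)
                // only needs read permission on the keytab file; the identity
                // used for authentication comes from the principal (guest).
                UserGroupInformation.loginUserFromKeytab(
                        "guest@EXAMPLE.COM", "/path/to/guest.keytab");
                System.out.println("Logged in as: "
                        + UserGroupInformation.getLoginUser().getUserName());
            }
        }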

  5. Add the advanced Hadoop properties if they are required by your cluster, bearing in mind that these properties cannot be contextualized. Click the [...] button to open the properties table and add the property or properties to be customized. At runtime, these settings override the corresponding default properties used by Talend Studio for its Hadoop engine, as pictured in the sketch after this step.
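
    The override behavior can be pictured with Hadoop's Configuration API; a sketch only, using dfs.replication as an arbitrary example property:

        // A value set explicitly wins over the value loaded from the
        // configuration resources, which is how the properties table
        // overrides Talend Studio's defaults. dfs.replication is an
        // arbitrary example; the file path is a placeholder.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;

        public class OverrideSketch {
            public static void main(String[] args) {
                Configuration conf = new Configuration();
                conf.addResource(new Path("/path/to/conf/hdfs-site.xml"));
                conf.set("dfs.replication", "1"); // explicit override
                System.out.println(conf.get("dfs.replication")); // prints 1
            }
        }
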
  6. If your Talend Studio instance supports designing Apache Spark Jobs and your cluster requires advanced Spark properties, select the Use Spark properties check box to open the properties table and add the property or properties to be used, bearing in mind that these properties cannot be contextualized. An example of such a property is sketched after this step.

    When you reuse this connection in your Apache Spark Jobs, the advanced Spark properties you have added here are automatically added to the Spark configurations for those Jobs.
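
    For illustration, the following sketch shows the kind of advanced property such a table typically carries, expressed through Spark's own SparkConf API; the property and its value are examples, not recommendations:

        // Example of an advanced Spark property, expressed with Spark's
        // SparkConf API. The property value is illustrative only.
        import org.apache.spark.SparkConf;

        public class SparkPropsSketch {
            public static void main(String[] args) {
                SparkConf sparkConf = new SparkConf()
                        .setAppName("props-sketch")
                        // switching the serializer is a common advanced setting
                        .set("spark.serializer",
                             "org.apache.spark.serializer.KryoSerializer");
                System.out.println(sparkConf.get("spark.serializer"));
            }
        }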

  7. If you are using Cloudera V5.5+ to run your MapReduce or Apache Spark Batch Jobs, you can select the Use Cloudera Navigator check box to use Cloudera Navigator to trace the lineage of a given data flow and discover how that data flow was generated by a Job. Bear in mind that the Cloudera Navigator configuration cannot be contextualized.

    With this option activated, you need to set the following parameters:

    • Username and Password: the credentials you use to connect to your Cloudera Navigator.

    • Cloudera Navigator URL: enter the location of the Cloudera Navigator instance to be connected to.

    • Cloudera Navigator Metadata URL: enter the location of the Navigator Metadata server.

    • Activate the autocommit option: select this check box to make Cloudera Navigator generate the lineage of the current Job at the end of the execution of this Job.

      Since this option actually forces Cloudera Navigator to generate lineage for all of its available entities, such as HDFS files and directories, Hive queries, or Pig scripts, it is not recommended for production environments because it slows the Job down.

    • Kill the job if Cloudera Navigator fails: select this check box to stop the execution of the Job when the connection to your Cloudera Navigator fails.

      Otherwise, leave it clear so that your Job continues to run.

    • Disable SSL validation: select this check box to let your Job connect to Cloudera Navigator without going through the SSL validation process.

      This feature is meant to facilitate the testing of your Job, but it is not recommended for use in a production cluster.

  8. Click the Check services button to verify that Talend Studio can connect to the NameNode and the ResourceManager services you have specified in this wizard. A dialog box pops up to indicate the checking process and the connection status. If it shows that the connection fails, review and update the connection information you have defined in the wizard. A rough manual equivalent of this check is sketched after this step.
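
    For reference, what Check services does can be approximated by hand with the Hadoop client APIs. A sketch, assuming a Configuration filled in as in step 3:

        // Rough manual equivalent of the Check services button: contact
        // the NameNode through the FileSystem API and the ResourceManager
        // through the YarnClient API.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.yarn.client.api.YarnClient;

        public class CheckServicesSketch {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration(); // fill in as in step 3
                try (FileSystem fs = FileSystem.get(conf)) {
                    fs.getFileStatus(new Path("/")); // NameNode reachable
                    System.out.println("NameNode OK");
                }
                YarnClient yarn = YarnClient.createYarnClient();
                yarn.init(conf);
                yarn.start();
                System.out.println("ResourceManager OK, nodes: "
                        + yarn.getNodeReports().size());
                yarn.stop();
            }
        }
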
  9. Click Finish to validate your changes and close the wizard.

    The newly set-up Hadoop connection is displayed under the Hadoop cluster folder in the Repository tree view. This connection has no sub-folders until you create connections to elements under that Hadoop distribution.