Cloudera Impala Hadoop Hive Server - Import - 7.1

Talend Data Catalog Bridges

author: Talend Documentation Team
EnrichVersion: 7.1
EnrichProdName:
- Talend Big Data Platform
- Talend Data Fabric
- Talend Data Management Platform
- Talend Data Services Platform
- Talend MDM Platform
- Talend Real-Time Big Data Platform
EnrichPlatform: Talend Data Catalog

Bridge Specifications

Vendor: Cloudera
Tool Name: Impala Hadoop Hive Database
Tool Version: 0.13
Tool Web Site: http://www.cloudera.com/products/apache-hadoop/impala.html
Supported Methodology: [Relational Database] Data Store (Physical Data Model) via JDBC API
Incremental Harvesting
Multi-Model Harvesting
Remote Repository Browsing for Model Selection
Data Profiling

BRIDGE INFORMATION
Import tool: Cloudera Impala Hadoop Hive Database 0.13 (http://www.cloudera.com/products/apache-hadoop/impala.html)
Import interface: [Relational Database] Data Store (Physical Data Model) via JDBC API from Cloudera Impala Hadoop Hive Server
Import bridge: 'ClouderaImpala' 10.1.0

BRIDGE DOCUMENTATION
IMPORTING FROM CLOUDERA IMPALA USING JDBC.

This bridge establishes a JDBC connection to the Cloudera Impala server in order to extract the physical metadata. In the case of a very large Hive database, this bridge can also establish a JDBC connection to the Hive metastore server (see all parameter names starting with Metastore) in order to accelerate the extraction of the physical metadata. It is critical that the parameters are filled in correctly to satisfy the local connection requirements on the client workstation that runs the bridge. Please refer to the individual parameters' tool tips for more detailed examples.
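
For illustration, here is a minimal sketch of the kind of JDBC metadata extraction the bridge performs. It assumes the Apache Hive JDBC driver class (org.apache.hive.jdbc.HiveDriver) and the example URL from the Url parameter below; the class name ImpalaMetadataProbe, the 'default' schema, and the empty credentials are placeholders for this sketch, not values mandated by the bridge.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ImpalaMetadataProbe {
    public static void main(String[] args) throws Exception {
        // Driver and URL are assumptions for this sketch; the driver jar
        // and its dependencies must be on the classpath (see the jar
        // lists below and the 'Java library directory' parameter).
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://COMPUTER_NAME_OR_IP:10000/";

        try (Connection conn = DriverManager.getConnection(url, "hive", "")) {
            // DatabaseMetaData is the standard JDBC entry point for the
            // physical metadata (schemas, tables, columns) the bridge reads.
            ResultSet tables = conn.getMetaData()
                    .getTables(null, "default", "%", new String[] {"TABLE"});
            while (tables.next()) {
                System.out.println(tables.getString("TABLE_SCHEM")
                        + "." + tables.getString("TABLE_NAME"));
            }
        }
    }
}

The bridge parameters mirror these inputs: the driver jars come from the 'Java library directory' parameter and the connection string from the 'Url' parameter.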

If you have a custom SerDe for one or more of your tables, it should be part of the Hive classpath. You can add the SerDe jars to the HIVE_HOME/lib directory so they are picked up automatically. If Hive cannot find the custom SerDe for a given table, it throws an exception and the bridge skips reading the metadata for that table.

Please be sure to specify a JDBC driver and its dependent jar files.

For version 4.x, these jar files are as follows and are typically available under /usr/lib/hive/lib:
- antlr-runtime-X.X.jar
- commons-logging-api-X.X.X.jar
- derby-XX.X.X.X.jar
- hive-exec-X.XX.X-cdh4.X.X.jar
- hive-jdbc-X.XX.X-cdh4.X.X.jar
- hive-metastore-X.XX.X-cdh4.X.X.jar
- hive-service-X.XX.X-cdh4.X.X.jar
- jdo2-api-X.X-ec.jar
- libfb303-X.X.X.jar
- log4j-X.X.XX.jar
- slf4j-api-X.X.X.jar

Under /usr/lib/hadoop:
- hadoop-common-X.X.X-cdh4.X.X.jar

Under /usr/lib/impala/lib:
- slf4j-logXjXX-X.X.X.jar


For version 5.1, the required jar files are as follows and typically are available under /usr/lib/hive/lib:
- antlr-runtime-X.X.jar
- commons-logging-X.X.X.jar
- derby-XX.X.X.X.jar
- hive-common-X.XX.X-cdh5.X.X.jar
- hive-exec-X.XX.X-cdh5.X.X.jar
- hive-jdbc-X.XX.X-cdh5.X.X.jar
- hive-metastore-X.XX.X-cdh5.X.X.jar
- hive-service-X.XX.X-cdh5.X.X.jar
- httpclient-X.X.X.jar
- httpcore-X.X.X.jar
- libfb303-X.X.X.jar
- libthrift-X.X.X.jar
- log4j-X.X.XX.jar
- slf4j-api-X.X.X.jar

Under /usr/lib/hadoop:
- hadoop-common-X.X.X-cdh5.X.X.jar

Under /usr/lib/impala/lib:
- slf4j-logXjXX-X.X.X.jar


For version 5.3.x, according to the Cloudera documentation, add /usr/lib/hive/lib/*.jar and /usr/lib/hadoop/*.jar to your classpath: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_hive_jdbc_install.html

For version 5.4.x, the required jar files are as follows and typically are available under /usr/lib/hadoop:
- hadoop-common-X.X.X-cdh5.X.X.jar

Under /usr/lib/hive/lib:
- hive-jdbc-standalone.jar
- log4j-X.X.XX.jar
- slf4j-api-X.X.X.jar

For version 5.5.x, the required Impala jar files may be downloaded from the Cloudera site:
- hive_metastore.jar
- hive_service.jar
- ImpalaJDBC4.jar
- libfb303-0.9.0.jar
- libthrift-0.9.0.jar
- log4j-1.2.14.jar
- ql.jar
- slf4j-api-1.5.11.jar
- slf4j-log4j12-1.5.11.jar
- TCLIServiceClient.jar
- zookeeper-3.4.6.jar


FREQUENTLY ASKED QUESTIONS

Q: How do I test/debug the Windows JDBC client connection to a secured (e.g. Kerberos) Hadoop Hive server?
A: In order to establish a connection to a secured (e.g. Kerberos) Hadoop Hive server, you must use the proper Hadoop distribution vendor-specific URL (parameter Url) and the security-related parameters of this bridge (Kerberos configuration file, Keytab file, User proxy, and Miscellaneous). Follow the instructions within each of these parameters' tool tips.
In order to test/debug the connectivity to a secured (e.g. Kerberos) Hadoop Hive server, you must have access to all the Hadoop server and Kerberos messages, which are not always transmitted to the user interface. Therefore, you can use the utility provided at '${MODEL_BRIDGE_HOME}\bin\hive_test.bat'. Edit that batch file to provide similar parameters, and run the script to analyze the Hadoop Hive server and Kerberos messages. A minimal standalone equivalent is sketched below.
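
When the batch utility is not available, the following is a minimal sketch of the same kind of connectivity test. It assumes the standard Apache Hive JDBC driver, that a Kerberos ticket has already been obtained (e.g. with kinit), and that the class name SecureHiveConnectionTest, host, realm, and file path shown are placeholders to be replaced with your own values.

import java.sql.Connection;
import java.sql.DriverManager;

public class SecureHiveConnectionTest {
    public static void main(String[] args) throws Exception {
        // Equivalent of the bridge's -d option: print the Kerberos V5
        // protocol exchange so server-side failures become visible.
        System.setProperty("sun.security.krb5.debug", "true");
        // Placeholder client-side Kerberos configuration (see the
        // 'Kerberos configuration file' bridge parameter).
        System.setProperty("java.security.krb5.conf", "/etc/krb5/krb5.conf");

        // Requires the Hadoop client jars on the classpath (see the
        // 'Java library directory' parameter) and a valid ticket cache.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // 'principal=' is the standard HiveServer2 Kerberos URL syntax;
        // replace the host and realm with your cluster's values.
        String url = "jdbc:hive2://COMPUTER_NAME_OR_IP:10000/default;"
                + "principal=hive/_HOST@EXAMPLE.COM";

        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected: "
                    + conn.getMetaData().getDatabaseProductName());
        }
    }
}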


Bridge Parameters

Parameter Name Description Type Values Default Scope
Java library directory Directory containing JAR files necessary to access HiveServer2:
- JDBC driver JAR files specific for the Hadoop distribution
- Hadoop client JAR files (when using Kerberos secure mode. Usually available at /usr/lib/hadoop/client)
- Java Cryptography Extension JAR files (when using high strength encryption. Usually available at Oracle Java website)
- 'bin' directory with winutils.exe to avoid non-critical exceptions when running on Windows without Hadoop installed

Generic Apache Hive documentation for HiveServer2 JDBC Clients connectivity: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-RunningtheJDBCSampleCode
DIRECTORY     Mandatory
Url Enter the JDBC URL where the HiveServer2 or Impala server is running.
Example:

jdbc:hive2://COMPUTER_NAME_OR_IP:10000/
OR
jdbc:impala://COMPUTER_NAME_OR_IP:21050/
STRING   jdbc:hive2://COMPUTER_NAME_OR_IP:10000/ Mandatory
User The Hive username on whose behalf the connection is being made. STRING      
Password The Hive user's password on whose behalf the connection is being made. PASSWORD      
Schema Description: you can specify a list of Hive schemas to import.
When the list is empty, all available schemas are imported.
The list can have one or more schema names separated by semicolons (e.g. schema1; schema2).

You can specify schema name patterns using the '%' wildcard symbol and the 'NOT' keyword.

Patterns support inclusions and exclusions.
Here is an example of inclusion syntax, "A%; %B; %C%; D", which matches schema names that:
- start with A, or
- end with B, or
- contain C, or
- equal D

To exclude a pattern, prefix it with 'NOT'. Here is an example of exclusion syntax, "A%; NOT %def",
which imports schemas whose names start with 'A' and do not end with 'def' (an illustrative sketch of this pattern matching follows this parameter).
REPOSITORY_SUBSET      
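
For readers who want to check a pattern list before running an import, here is a minimal sketch of the inclusion/exclusion semantics described above, re-implemented for illustration only (this is not the bridge's actual code, and the class name NamePatternFilter is hypothetical). It translates '%' into a regular-expression wildcard and applies NOT-prefixed patterns as exclusions.

import java.util.Arrays;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class NamePatternFilter {

    /** True when 'name' satisfies the semicolon-separated pattern list. */
    static boolean matches(String patternList, String name) {
        boolean included = false;
        boolean hasInclusion = false;
        for (String raw : patternList.split(";")) {
            String p = raw.trim();
            if (p.isEmpty()) continue;
            boolean exclude = p.regionMatches(true, 0, "NOT ", 0, 4);
            if (exclude) p = p.substring(4).trim();
            // Translate '%' into a regex wildcard; quote everything else
            // so names containing regex metacharacters stay literal.
            String regex = Arrays.stream(p.split("%", -1))
                    .map(Pattern::quote)
                    .collect(Collectors.joining(".*"));
            boolean hit = name.matches(regex);
            if (exclude && hit) return false;   // any NOT match rejects the name
            if (!exclude) {
                hasInclusion = true;
                if (hit) included = true;
            }
        }
        // With no inclusion patterns at all, everything is imported.
        return hasInclusion ? included : true;
    }

    public static void main(String[] args) {
        for (String s : Arrays.asList("Abc", "Adef", "xyzB", "other")) {
            System.out.println(s + " -> " + matches("A%; NOT %def", s));
        }
        // Prints: Abc -> true, Adef -> false, xyzB -> false, other -> false
    }
}

The Table parameter below uses the same pattern syntax, so the same sketch applies to table names.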
Table Description: you can specify a list of Hive tables to import.
When the list is empty, all available tables are imported.
The list can have one or more table names separated by semicolons (e.g. sample_07; sample_08).

You can specify table name patterns using the '%' wildcard symbol and the 'NOT' keyword (the same syntax as the Schema parameter).

Patterns support inclusions and exclusions.
Here is an example of inclusion syntax, "A%; %B; %C%; D", which matches table names that:
- start with A, or
- end with B, or
- contain C, or
- equal D

To exclude a pattern, prefix it with 'NOT'. Here is an example of exclusion syntax, "A%; NOT %def",
which imports tables whose names start with 'A' and do not end with 'def'.
STRING      
SerDe jars list This option may be used to specify a semicolon-separated list of fully qualified path names to the SerDe jars the bridge will use to execute remotely on the Hive system. STRING      
Kerberos configuration file Use this parameter to override the Kerberos configuration on the client side. Example:
/etc/krb5/krb5.conf
STRING      
Keytab file In the case where Kerberos is used for Hive authentication, this option may be used to specify the fully qualified path name to the Kerberos keytab file. Example:
/etc/security/keytabs/hive.service.keytab
STRING      
User proxy In the case where Kerberos is used for Hive authentication, this option may be used to specify a proxy user or group name. STRING      
Multiple threads Number of worker threads to harvest metadata asynchronously.
Leave the parameter blank to have the bridge compute the value, between 1 and 6, based on JVM architecture and number of available CPU cores.
Specify a numeric value greater than or equal to 1 to provide the actual number of threads.
If the value specified is invalid, a warning will be issued and 1 will be used instead.
If you experience out of memory conditions when harvesting metadata asynchronously, experiment with smaller numbers.
If your machine has a lot of available memory (e.g. 10 GB or more), you can try larger numbers when harvesting many documents at once.
Note that setting the number too high can actually decrease the performance due to resource contention.
NUMERIC      
Miscellaneous Specify miscellaneous options, each identified by a -letter flag and a value.

For example, -m 4G -f 100 -j -Dname=value -Xms1G

-f the database driver fetch size in number of rows (e.g. -f 100)

-m the maximum Java memory size as a whole number (e.g. -m 4G or -m 2500M).

-v set environment variable(s) (e.g. -v var1=value -v var2="value with spaces").

-j the last option that is followed by Java command line options (e.g. -j -Dname=value -Xms1G).

-i specify this flag to import Indexes.

-zip the export folder in which to store the DDL CREATE TABLE statements as one zip file (e.g. -zip c:/temp/ddl)

-d enables the Kerberos debugging mode, which allows you to follow the bridge's execution of the Kerberos V5 protocol.
The bridge sets the system property sun.security.krb5.debug to "true".
When you have a Kerberos configuration issue, enable the -d option and send the execution log to support.

-location.skip disables table connections to external files

-partition.allsamples imports all table partition location samples into the [PartitionLocationsWithSamples] property

-partition.import imports all table partition locations into a merged table location

-location.pattern Hive location-based partition directory paths.
The bridge tries to detect partitions automatically for standard partition locations,
where the location contains the Hive table name and partition names.

You can extend the detection process for some or all partitions by specifying them in this parameter.

Specify the Hive partition locations pattern.
Separate multiple paths with the , (or ;) character.

Example:
-location.pattern hdfs://localhost:9000/user/hive/warehouse/employee
creates the employee dataset even if the employee location contains other folders
STRING      

 

Bridge Mapping

Meta Integration Repository (MIR) Metamodel (based on the OMG CWM standard)
"Cloudera Impala Hadoop Hive Server" Metamodel: Apache Hive (Database)
Mapping Comments
Attribute Column, Partition Column Columns which are part of the partition on the table.
Comment Comment  
Description Description  
ExtraConstraint Constraint  
InitialValue Initial Value  
Name Name  
NativeId Native Id  
Optional Nullable  
Position Position  
Class Table  
Comment Comment  
Description Description  
Name Name  
NativeId Native Id  
ClassDiagram Diagram  
Description Description  
Name Name  
ConnectionPackage HDFS Folder  
Description Description  
Name Name  
DatabaseSchema Database  
Comment Comment  
Description Description  
Name Name  
NativeId Native Id  
DesignPackage Subject Area  
Description Description  
Name Name  
FlatTextFile External File  
Description Description  
Name Name  
Index Index, CLUSTERED Clustered by or bucketed columns
Comment Comment  
Description Description  
Name Name  
NativeId Native Id  
Join Logical Relationship  
Description Description  
Name Name  
SQLViewAttribute View Column  
Comment Comment  
Description Description  
Name Name  
NativeId Native Id  
Position Position  
SQLViewEntity View  
Comment Comment  
Description Description  
Name Name  
NativeId Native Id  
ViewStatement View Statement  
StoreConnection External Tables Connections  
Description Description  
Name Name  
StoreModel Hive Model  
Author Author  
Comment Comment  
CreationTime Creation Time  
Description Description  
ModificationTime Modification Time  
Modifier Modifier  
Name Name  
NativeId Native Id  
StoreType Store Type  
SystemMajorVersion System Major Version  
SystemMinorVersion System Minor Version  
SystemReleaseVersion System Release Version  
SystemType System Type  
SystemTypeOld System Type Old