MapR Hadoop HiveQL DDL - Import - 7.1

Talend Data Catalog Bridges

Author: Talend Documentation Team
EnrichVersion: 7.1
EnrichProdName:
Talend Big Data Platform
Talend Data Fabric
Talend Data Management Platform
Talend Data Services Platform
Talend MDM Platform
Talend Real-Time Big Data Platform
EnrichPlatform: Talend Data Catalog

Bridge Specifications

Vendor: MapR
Tool Name: Hadoop Hive Database
Tool Version: 0.13
Tool Web Site: http://www.mapr.com/products
Supported Methodology: [Relational Database] Data Store (Physical Data Model), (Expression Parsing) via SQL TXT File
Incremental Harvesting
Multi-Model Harvesting
Remote Repository Browsing for Model Selection
Data Profiling

BRIDGE INFORMATION
Import tool: MapR Hadoop Hive Database 0.13 (http://www.mapr.com/products)
Import interface: [Relational Database] Data Store (Physical Data Model), (Expression Parsing) via SQL TXT File from MapR Hadoop Hive Database SQL DDL Script
Import bridge: 'DdlScriptApacheHiveQLImport.MapR' 10.1.0

BRIDGE DOCUMENTATION
WARNING: This database DDL SQL script import bridge was designed only for the DDL statements that create tables, views, etc., and has limitations. Instead, use the dedicated live database import via JDBC, which generates a complete and detailed data flow lineage integrating all transformations with stored procedures, views, etc. (which might have been created by many such DDL SQL scripts).
The purpose of this HiveQL DDL script import bridge is to detect and parse all the SQL statements embedded in the script in order to generate the exact scope (data models) of the desired database.


Bridge Parameters

Parameter Name Description Type Values Default Scope
File Select a file that contains DDL scripts to import FILE
*.sql
*.hql
*.ddl
Mandatory
Default schema The default schema name is applied only to objects that do not have a schema qualifier defined. STRING
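The default-schema rule above can be sketched as follows; this is an illustrative helper (the function name and schema names are not part of the bridge), showing only that the default is applied to unqualified object names:

```python
def qualify(object_name, default_schema):
    # Apply the default schema only to objects that have no
    # schema qualifier of their own.
    return object_name if '.' in object_name else default_schema + '.' + object_name
```

For example, with a default schema of 'staging', an unqualified table 'T1' becomes 'staging.T1', while an already qualified 'sales.T2' is left unchanged.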
Schemas Description: you can specify a list of database schemas to import.
When the list is empty, all available schemas are imported.
The list can have one or more schema names separated by semicolons (e.g. schema1; schema2).
You can specify schema name patterns using SQL LIKE expression syntax.
Patterns support inclusions and exclusions.
Here is an example of inclusion syntax, "A%; %B; %C%; D", that tries to get schema names that:
- start with A, or
- end with B, or
- contain C, or
- equal D

Note: when a pattern has special characters, such as spaces, enclose it in single quotation marks (e.g. "'two wo%'; onewo%;").

To exclude a pattern, prefix it with NOT. Here is an example of exclusion syntax, "A%; %B; NOT %SYS; NOT 'SYS%'",
which contributes to the following SQL filter: "where (name like A% or name like %B) and (name not like %SYS) and (name not like 'SYS%')"
STRING
Tables, Views Description: you can specify a list of database tables to import.
When the list is empty, all available tables are imported.
The list can have one or more table names separated by semicolons (e.g. table1; table2).
You can specify table name patterns using SQL LIKE expression syntax.
Patterns support inclusions and exclusions.
Here is an example of inclusion syntax, "A%; %B; %C%; D", that tries to get table names that:
- start with A, or
- end with B, or
- contain C, or
- equal D

Note: when a pattern has special characters, such as spaces, enclose it in single quotation marks (e.g. "'two wo%'; onewo%;").

To exclude a pattern, prefix it with NOT. Here is an example of exclusion syntax, "A%; %B; NOT %SYS; NOT 'SYS%'",
which contributes to the following SQL filter: "where (name like A% or name like %B) and (name not like %SYS) and (name not like 'SYS%')"
STRING
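The inclusion/exclusion semantics used by the Schemas and Tables, Views parameters can be sketched as follows. This is a minimal, illustrative re-implementation, not the bridge's own code; it assumes case-sensitive matching and supports only the '%' and '_' LIKE wildcards described above:

```python
import re

def like_to_regex(pattern):
    # Translate SQL LIKE wildcards into a regular expression:
    # '%' matches any sequence of characters, '_' a single character.
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
    return re.compile('^' + ''.join(parts) + '$')

def parse_filter(spec):
    # Split the semicolon-separated list into inclusion and exclusion
    # patterns, honoring the NOT prefix and single-quoted patterns.
    includes, excludes = [], []
    for raw in spec.split(';'):
        item = raw.strip()
        if not item:
            continue
        negated = item.upper().startswith('NOT ')
        if negated:
            item = item[4:].strip()
        item = item.strip("'")  # unquote patterns such as 'two wo%'
        (excludes if negated else includes).append(like_to_regex(item))
    return includes, excludes

def keep(name, includes, excludes):
    # A name is imported when it matches at least one inclusion pattern
    # (or the inclusion list is empty) and matches no exclusion pattern.
    included = not includes or any(p.match(name) for p in includes)
    return included and not any(p.match(name) for p in excludes)
```

For the documented example "A%; %B; NOT %SYS; NOT 'SYS%'", inclusions are combined with OR and exclusions with AND, mirroring the SQL filter shown above.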
Miscellaneous Using the Miscellaneous parameter, you can specify the following options:
- 'e': encoding. This value is used to load text from the specified script files. By default, UTF-8 is used. Other possible values include UTF-16, UTF-16BE, US-ASCII, and ISO-8859-1.
Example: -e=ISO-8859-1
- 'p': warehouse path. It is /user/hive/warehouse by default.
Example: -p=/user/hive/warehouse
- 's': path to a file that resolves shell parameters in either Windows (%param%) or Linux (${param}, $1) format. This parameter defines the path to the key/value pair file. The path can be enclosed in double quotation marks if it contains spaces or other special characters. The records from the file are used to preprocess all the scripts and replace the corresponding shell parameters with their actual values. The key literals must not be decorated with escape characters, and the matching rules are case sensitive. The colon character ':' is used as the key/value pair delimiter and must be escaped with a backslash '\' if it is part of a parameter name. For example, for the script 'CREATE VIEW %SCHEMA1%.V1 AS SELECT C1 from SCHEMA2.%TABLE2%;', the file with the parameters can be organized in the following way:
SCHEMA1:actual_schema1
TABLE2:actual_table2
Example: -s=J:\MIMB\map_of_shell_parameters.txt
STRING
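The '-s' preprocessing described above can be sketched as follows. This is an illustrative approximation, not the bridge's implementation; it covers the documented %param% and ${param} placeholder forms and the '\:' key escape, but not positional parameters such as $1:

```python
import re

def load_params(path):
    # Read "key:value" records from the shell-parameter file;
    # a colon inside a key must be escaped as '\:' in the file.
    params = {}
    with open(path, encoding='utf-8') as fh:
        for line in fh:
            line = line.rstrip('\n')
            # Split on the first unescaped colon.
            m = re.match(r'((?:[^:\\]|\\.)*):(.*)', line)
            if m:
                key = m.group(1).replace('\\:', ':')
                params[key] = m.group(2)
    return params

def resolve(script, params):
    # Replace Windows-style %param% and Linux-style ${param}
    # placeholders with the values from the key/value file.
    for key, value in params.items():
        script = script.replace('%' + key + '%', value)
        script = script.replace('${' + key + '}', value)
    return script
```

Applied to the documented example, a file containing 'SCHEMA1:actual_schema1' and 'TABLE2:actual_table2' turns the view definition into 'CREATE VIEW actual_schema1.V1 AS SELECT C1 from SCHEMA2.actual_table2;'.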


Bridge Mapping

Mapping information is not available