Hortonworks Atlas - Import - 7.1

Talend Data Catalog Bridges

author
Talend Documentation Team
EnrichVersion
7.1
EnrichProdName
Talend Big Data Platform
Talend Data Fabric
Talend Data Management Platform
Talend Data Services Platform
Talend MDM Platform
Talend Real-Time Big Data Platform
EnrichPlatform
Talend Data Catalog

Bridge Requirements

This bridge:
  • requires the tool to be installed to access its SDK.

  • requires Internet access to https://repo.maven.apache.org/maven2/ and/or other tool sites to download drivers into <TDC_HOME>/data/download/MIMB/. For more information on how to retrieve third-party drivers when the TDC server cannot access the Internet, see this article.

Bridge Specifications

Vendor Hortonworks
Tool Name Atlas
Tool Version API 2.0
Tool Web Site http://hortonworks.com/apache/atlas/
Supported Methodology [Metadata Management] Multi-Model, Metadata Repository, Data Store (Physical Data Model), ETL (Source and Target Data Stores, Transformation Lineage, Expression Parsing) via REST API
Multi-Model Harvesting
Incremental Harvesting
Data Profiling
Remote Repository Browsing for Model Selection

SPECIFICATIONS
Tool: Hortonworks Atlas version API 2.0 via REST API
See http://hortonworks.com/apache/atlas/
Metadata: [Metadata Management] Multi-Model, Metadata Repository, Data Store (Physical Data Model), ETL (Source and Target Data Stores, Transformation Lineage, Expression Parsing)
Bridge: ApacheAtlasImport.HortonworksAtlas version 11.0.0

DISCLAIMER
This bridge requires internet access to https://repo.maven.apache.org/maven2/ (and exceptionally a few other tool sites)
in order to download the necessary third party software libraries into $HOME/data/download/MIMB/
(such directory can be copied from another MIMB server with internet access).
By running this bridge, you hereby acknowledge responsibility for the license terms and any potential security vulnerabilities from these downloaded third party software libraries.

OVERVIEW
Atlas repositories contain the Hadoop data store definitions of HDFS files and HIVE tables, as well as the data flow lineage of operations between them.
Atlas repositories determine such data flow lineage from the execution logs of various technologies (MapReduce, Pig, Impala, etc.) produced by various applications and ingestion framework/ETL/DI jobs.
The purpose of this Atlas import bridge is NOT to extract the data store definitions; use the HIVE and HDFS import bridges for that purpose.
Instead, the purpose of this Atlas import bridge is to extract the data flow lineage (DI processes) between previously imported HDFS and HIVE data stores.
Note that the dedicated HiveQL or Spark import bridges should be preferred in order to get more detailed feature/column level lineage with correlation to the original scripts.
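
As an illustration of where this lineage comes from, the following minimal Python sketch reads the lineage of a single entity directly from the Atlas V2 REST API. It is not part of the bridge itself; the URL, credentials and entity GUID are placeholders to be replaced with real values.

import requests

ATLAS_URL = "http://localhost:21000"            # value of the URL parameter
AUTH = ("holger_gov", "holger_gov")             # Login / Password parameters
guid = "<guid of an imported hive_table or hdfs_path entity>"  # placeholder

# direction matches the 'Lineage direction' parameter: INPUT, OUTPUT or BOTH
response = requests.get(
    ATLAS_URL + "/api/atlas/v2/lineage/" + guid,
    params={"direction": "BOTH", "depth": 3},
    auth=AUTH,
)
response.raise_for_status()
lineage = response.json()

# 'relations' lists the edges of the lineage graph (between DataSet and Process
# entities), which the bridge converts into source-to-target transformation lineage.
entities = lineage.get("guidEntityMap", {})
for relation in lineage.get("relations", []):
    src = entities.get(relation["fromEntityId"], {}).get("attributes", {}).get("qualifiedName")
    dst = entities.get(relation["toEntityId"], {}).get("attributes", {}).get("qualifiedName")
    print(src, "->", dst)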

Please refer to the individual parameter tooltips for more detailed examples.


Bridge Parameters

Parameter Name Description Type Values Default Scope
URL URL of the metadata server.
e.g. http://localhost:21000/
STRING     Mandatory
Login Name of the user account used to connect to the metadata server. STRING   holger_gov  
Password Password for the user account used to connect to the metadata server. PASSWORD   holger_gov  
Entities filter The bridge imports and builds lineage from the set of start entities filtered by this parameter. The query syntax is exactly the same as in the Atlas search UI.
Both the basic full-text mode and the advanced DSL mode are allowed. The entity type is set to 'DataSet' when full-text mode is used.
Some query examples:
schemaname.tablename
hdfs://quickstart.cloudera:8020/user/hive/warehouse/customers*
hive_table where name='test'

To make the import faster, provide as precise a query as possible and check the filtered entity count in the Atlas UI before import (see the sketch after this parameter table).
STRING     Mandatory
Lineage direction Use this parameter to set the lineage extraction direction ENUMERATED
Both
Input
Output
BP_LINEAGE_DIRECTION_BOTH  
Kerberos configuration file Path to the Kerberos krb5 configuration file (usually krb5.ini or krb5.conf with the correct configuration inside)
Example: /etc/krb5/krb5.conf
STRING      
Kerberos login.conf file Path to a login.conf file with the correct keytab file path and principal name. For example:
client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab=/path/to/userKeytab
principal='userName';
};
STRING      
Miscellaneous Specify miscellaneous options identified with a -option followed by a value if required:

GENERAL OPTIONS
-m <Java Memory's maximum size>
1G by default on a 64-bit JRE or as set in conf/conf.properties, e.g.
-m 8G
-m 2500M

-j <Java Runtime Environment command line options>
This option must be the last one in the Miscellaneous parameter as all the text after -j is passed "as is" to the JRE, e.g.
-j -Dname=value -Xms1G

-jre <Java Runtime Environment full path name>
It can be an absolute path to javaw.exe on Windows or a link/script path on Linux, e.g.
-jre "c:\Program Files\Java\jre1.8.0_211\bin\javaw.exe"

-v <Environment variable value>
None by default, e.g.
-v var1=value1 -v var2="value2 with spaces"

-model.name <model name>
Override the model name, e.g.
-model.name "My Model Name"

-prescript <script name>
The script must be located in the bin directory, and have .bat or .sh extension.
The script path must not include any parent directory symbol (..).
The script should return exit code 0 to indicate success, or another value to indicate failure.
For example:
-prescript \"script.bat\"

-backup <directory>
Full path of an empty directory to save the metadata input files for further troubleshooting.

REPOSITORY OPTIONS
-restore.dir <backup directory path>
Specify the backup directory to be restored, e.g.
-restore.dir C:\test\restoreDir

-connection.timeout <number of seconds>
Specify the connection timeout in seconds (20 by default), e.g.
-connection.timeout 20

-threads.count <number>
Sets the number of threads to use while performing requests to the API, e.g.
-threads.count 5

-cs <comma separated list of connections>
Creates Schemas per Connection, e.g.
-cs * (create a separate connection for each schema in each connection)
-cs c1, c2 (create separate connections for each schema in connections c1 and c2)
-cs app1=c.s1 (create a connection named app1 for the schema s1 in the connection c)

ATLAS OPTIONS
-entities.limit <max number of entities>
Specify the maximum number of entities to fetch per request, e.g.
-entities.limit 1000

-hive.support.quoted.identifiers <identifiers>
None by default. Set this option to enable regex-based column specification, e.g.
-hive.support.quoted.identifiers none

-ignore.deleted
Use this option to disable the loading of deleted entities.
STRING      
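
As referenced in the Entities filter description above, the following minimal Python sketch checks how many entities a filter query matches by calling the Atlas search REST API before running the import. The host, credentials and sample queries are placeholder assumptions, and the counts are capped at the requested limit.

import requests

ATLAS_URL = "http://localhost:21000"   # value of the URL parameter
AUTH = ("holger_gov", "holger_gov")    # Login / Password parameters

def count_basic(query, limit=1000):
    # Full-text mode: the bridge restricts the entity type to 'DataSet'.
    r = requests.get(ATLAS_URL + "/api/atlas/v2/search/basic",
                     params={"query": query, "typeName": "DataSet", "limit": limit},
                     auth=AUTH)
    r.raise_for_status()
    return len(r.json().get("entities") or [])

def count_dsl(query, limit=1000):
    # Advanced DSL mode, same syntax as the Atlas search UI.
    r = requests.get(ATLAS_URL + "/api/atlas/v2/search/dsl",
                     params={"query": query, "limit": limit},
                     auth=AUTH)
    r.raise_for_status()
    return len(r.json().get("entities") or [])

print(count_basic("schemaname.tablename"))
print(count_dsl("hive_table where name='test'"))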

 

Bridge Mapping

Mapping information is not available