IBM Cognos Content Manager - Import - 7.3

Talend Data Catalog Bridges

Version
7.3
Language
English
Product
Talend Cloud
Module
Talend Data Catalog
Last publication date
2023-08-17

Bridge Requirements

This bridge:
  • is only supported on Microsoft Windows.

  • requires the tool to be installed to access its SDK.

Bridge Specifications

Vendor IBM
Tool Name Cognos Content Manager
Tool Version RN to C11
Tool Web Site http://www.ibm.com/software/analytics/cognos/
Supported Methodology [Business Intelligence] Multi-Model, Metadata Repository, BI Design (RDBMS Source, OLAP Source, Dimensional Target, Transformation Lineage, Expression Parsing), BI Report (Relational Source, Dimensional Source, Expression Parsing, Report Structure) via Java API
Data Profiling
Incremental Harvesting
Multi-Model Harvesting
Remote Repository Browsing for Model Selection

SPECIFICATIONS
Tool: IBM / Cognos Content Manager version RN to C11 via Java API
See http://www.ibm.com/software/analytics/cognos/
Metadata: [Business Intelligence] Multi-Model, Metadata Repository, BI Design (RDBMS Source, OLAP Source, Dimensional Target, Transformation Lineage, Expression Parsing), BI Report (Relational Source, Dimensional Source, Expression Parsing, Report Structure)
Component: CognosRnRepository version 11.1.0

OVERVIEW
This import bridge supports the following types of objects:
- FrameworkManager models
- DataModules
- Dynamic Cube models
- PowerPlay Transformer models (requires configuration via parameter 'Transformer import configuration')
- QueryStudio queries
- ReportStudio reports
- Exploration reports (Dashboards and Stories)

This import bridge does not support the following types of objects:
- AnalysisStudio reports
- PowerPlay reports

REQUIREMENTS
n/a

FREQUENTLY ASKED QUESTIONS
Q: Why are there multiple versions of a given package extracted from Content Manager?

A: Any given Cognos design model may be edited/updated in Cognos Framework Manager (FM) and then published as a new version of an FM package in Content Manager (CM). The full Cognos development life cycle then requires migrating any related reports to use this new version of the FM package in CM. If that migration is not completed for all such reports, some reports may still use an old version of a package, some old versions of a package may no longer be used (and therefore should be removed), and/or some new versions of a package may not yet be used by any report. Consequently, multiple versions of a package may still be used by different reports and are thus imported.

Q: How can I extract only the latest version of a package from Content Manager?

A: One may extract only the latest version of a package by selecting a single package in the 'Content' parameter (e.g. '/content/package[@name='GO Sales and Retailers']/model'), and by setting the 'Add dependent objects' parameter to 'None'. In such a case, only the latest version of that package will be extracted.

In addition, one may use the FM Package import bridge, which imports only one package at a time. In this case, only the latest version of that package will be imported.

Q: If the import log shows warnings 'Could not get model reference for Report XXX', what does it mean?

A: It is possible that repository report metadata may not have a valid reference to the report's or query's model.
It may help to open that report or query in Report or Query Studio and simply re-save it without making any changes.
That will update the metadata references in the repository. After re-saving, try to import the report or query in question again.
It is also possible that the model which the report or query is based upon is no longer accessible (i.e. got deleted, renamed etc.).
In this case try to have the report or the query fixed to refer to the correct model.

Q: What Cognos Java libraries are necessary at runtime?

A: The import bridge uses Java libraries located in folder: MIMB_HOME/java/CognosRepository
Starting with Cognos version 10.2.x, the Java libraries are no longer mixed-version compatible: they must exactly match the Cognos server version.
If the provided default libraries do not work for your particular version of the Cognos server, replace them as follows:

- Download the 'IBM Cognos Software Development Kit' software package from the IBM web site, and make sure to select the exact version matching your server (e.g. 11.0.5)
- Install the Cognos SDK software package on the machine running the import bridge
- Locate the Java libraries in the SDK, for example: /ibm/cognos/sdk/sdk/java/lib
- Configure the import bridge parameter 'Cognos SDK directory' to point to this directory path.
Starting with Cognos version 11.1, the Software Development Kit is now installed by default with Cognos Analytics server.
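As a quick sanity check before configuring the 'Cognos SDK directory' parameter, one might verify that the directory actually contains Java libraries. The sketch below is a hypothetical helper, not part of the bridge; the path is the example SDK location from above.

```python
# Hedged sanity check (not part of the bridge): verify that the directory
# intended for the 'Cognos SDK directory' parameter actually contains
# Java libraries before pointing the import bridge at it.
from pathlib import Path

sdk_dir = Path("/ibm/cognos/sdk/sdk/java/lib")  # example path from the text
jars = sorted(p.name for p in sdk_dir.glob("*.jar"))
if jars:
    print(f"{len(jars)} Java libraries found in {sdk_dir}")
else:
    print(f"No .jar files found in {sdk_dir} - check the SDK installation")
```

If the directory is empty or missing, fix the SDK installation before running the import bridge.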

LIMITATIONS
Refer to the current general known limitations at http://metaintegration.com/Products/MIMB/MIMBKnownLimitations.html or bundled in Documentation/ReadMe/MIMBKnownLimitations.html
In order to access Cognos Content Manager, its web services must be fully operational, which may involve network proxy and firewall setup. Before attempting any use of this import bridge, make sure you can first connect to the cm_tester utility provided by Cognos at 'http://localhost/c8/cm_tester.htm' (or 'http://localhost:9300/p2pd/cm_tester.htm' in Cognos 10.2), where localhost is substituted with the appropriate host name and port.
Please read the 'Dispatcher URL' parameter documentation for more details, and contact your Cognos administrator or Cognos support if necessary. This import bridge will not work if cm_tester is not fully operational.

Configuration steps for using the cm_tester tool:
- Start a web browser (such as Firefox or Chrome) to connect to the tester URL
- Click on the Options button to configure the Content Manager URL and use the Test button to verify it can connect.
- Click on the Logon button to enter your login credentials, e.g. Namespace=CognosEx Username=admin Password=*****
- Select the "Content Manager Service v1" Namespace
- Select the request template "queryMultiple"
- Edit the SOAP request text to replace the query search path #PATH# with a value like: /content/folder[@name='Samples']//*
- Edit the SOAP request text to specify what properties to return: #PROPERTY1# => defaultName #PROPERTY2# => searchPath
- Edit the SOAP request text to specify what properties to sort by: #SORT_BY# => defaultName
- Click on the Send button
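The placeholder edits in the steps above amount to simple text substitution in the request template. A minimal sketch follows; note that the template text here is a simplified stand-in, not the actual SOAP envelope used by cm_tester.

```python
# Illustrative sketch of the placeholder substitution performed when
# editing the cm_tester 'queryMultiple' request. The template below is a
# simplified stand-in for the real SOAP request text.
template = (
    "<searchPath>#PATH#</searchPath>"
    "<properties>#PROPERTY1#, #PROPERTY2#</properties>"
    "<sortBy>#SORT_BY#</sortBy>"
)

substitutions = {
    "#PATH#": "/content/folder[@name='Samples']//*",
    "#PROPERTY1#": "defaultName",
    "#PROPERTY2#": "searchPath",
    "#SORT_BY#": "defaultName",
}

request = template
for placeholder, value in substitutions.items():
    request = request.replace(placeholder, value)

print(request)
```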

You may also use the WSIL mashup service endpoint to browse the ReportStudio reports available in Content Manager: <Gateway URI>/rds/wsil
For version 8.x: http://localhost:9300/ibmcognos/bi/v1/disp/rds/wsil
For version 11.x: http://localhost:9300/bi/v1/disp/rds/wsil
https://www.ibm.com/docs/en/cognos-analytics/11.1.0?topic=reports-identifying-programmatically

SUPPORT
Provide a troubleshooting package with:
- the debug log (can be set in the UI or in conf/conf.properties with MIR_LOG_LEVEL=6)
- the metadata backup if available (can be set in the Miscellaneous parameter with the -backup option, although this common option is not implemented on all bridges for technical reasons).

Q: How do I provide IBM Cognos metadata to the support team to reproduce an issue?

A: There are two backup methods available.

The simplest: this bridge saves reports and models under the local MIMB cache, which provides a sufficient amount of metadata for support purposes. Invoke the -backup option under the Miscellaneous parameter and provide a directory where the bridge will save all pertinent metadata.

The lengthier method described below is also valid and may be used when specified by your support team: export your metadata from the IBM Cognos 8.4 or IBM Cognos 10 server into a ZIP archive following these steps:
1) Connect to the IBM Cognos Connection using Web browser.
2) Click 'Launch'->'IBM Cognos Administration'.
3) Click 'Configuration'.
4) Click 'Content Administration'.
5) Click on the 'New Export' icon. The New Export wizard appears.
6) Type a unique name and an optional description and screen tip for the deployment specification. Select the folder where you want to save it and click Next.
7) Choose whether to export the entire content store or to do a partial export of specific folders and directory content.
Note: please avoid exporting the whole content store, as it is generally not needed and restoring this type of export may erase the destination server's content store.
To export specific folders and directory content, click Select public folders and directory content, and then click Next.
8) In the Select the 'Public folders' content page, click Add.
9) In the Select entries page, in the 'Available Entries' box, select the packages or folders that you want to export.
You can browse the 'Public Folders' hierarchy and choose the packages and folders you want.
Click the right arrow button to move the selected items to the 'Selected entries' box, and click 'OK'.
10) Under 'Options', select whether you want to include the report output versions (not required), run history (not required), and schedules (not required) and what to do with entries in case of a conflict. Click 'Next'.
11) In the 'Select the directory content' page, select whether you want to export Cognos groups and roles, distribution lists and contacts, and data sources and connections, and what to do with the entries in case of a conflict. It is not required to export Cognos groups and roles or distribution lists and contacts.
12) Click Next.
13) In the 'Specify the general options' page, select whether to include access permissions and references to namespaces other than IBM Cognos, and who should own the entries after they are imported in the target environment.
It is highly recommended NOT to include access permissions and references to namespaces other than IBM Cognos.
It is also recommended that Entry ownership is set to the user performing the import.
14) Specify the Recording Level for the deployment history. The default is Basic. 'Trace' saves all deployment details but requires the most memory. Click 'Next'.
15) In the Specify a deployment archive page, under Deployment archive, select an existing deployment archive from the list, or type a new name to create one.
If you are typing a new name for the deployment archive, IBM Cognos recommends that you do not use spaces in the name.
If the name of the new deployment specification matches the name of an existing deployment archive, the characters _1, _2, _3, etc. are added to the end of the name.
16) Click Next.
17) Review the summary information and click 'Next'. If you want to change information, click 'Back' and follow the instructions.
18) Once you are ready to export the archive, select the action you want:
To run now or later, click Save and run once and click Finish. Specify the time and date for the run. Then click Run. Review the run time and click OK.
To schedule at a recurring time, click Save and schedule and click Finish. Then, select frequency and start and end dates. Then click OK.
To save without scheduling or running, click Save only, and then click Finish.
19) After you run the export, send the produced ZIP archive to the support team. The export can result in a single ZIP file or a multi-volume ZIP archive.
The exported archive is usually located on the server in the 'C:\Program Files\cognos\c8\deployment' directory.
Please read IBM Cognos documentation for more details about exporting metadata from your version of Cognos.


Bridge Parameters

Parameter Name Description Type Values Default Scope
Version Select the version of Cognos server you want to import from. ENUMERATED
Auto detect
Cognos 11.x
Cognos 10.x
Cognos 8.4
Cognos 8.3
Cognos 8.1 to 8.2
Cognos ReportNet 1.x
Auto detect  
Dispatcher URL Enter the URI used by the Framework Manager, Metrics Designer or SDK to send requests to Cognos.

This URI typically corresponds to the External dispatcher URI of one of the dispatchers in your installation. It must use the real network host name or IP address instead of localhost. If Framework Manager, Metrics Designer or SDK clients connect to Cognos through an intermediary like a load balancer or proxy, specify the host and port of the intermediary.

Cognos must be able to locate a gateway or dispatcher running on a Web server that supports chunking and attachments to handle large volumes of data. If there is no firewall between users and Cognos, components use the default setting. If there is a firewall, you must have access to at least one Web server that supports chunking outside of the firewall.

The http or https protocol prefix indicates if SSL is required. You can find the real value in the Cognos installation directory in the configuration\cogstartup.xml file.

Example:
<crn:parameter name='sdk'>
<crn:value xsi:type='xsd:anyURI'>http://localhost:9300/p2pd/servlet/dispatch</crn:value>
</crn:parameter>

To test the connection, please first connect to the Cognos Content Manager via a Web browser to make sure it is accessible from the client machine. If Cognos Content Manager is running and accessible, you will see a status page. The state of the server must be 'running'. Example of the test URL: http://localhost:9300/p2pd/servlet

To test whether your authentication parameters work, use the Web-client-based tool from Cognos which allows one to verify the connection and authentication availability. Sample URLs are:
for version 8.x: http://localhost/c8/cm_tester.htm
for version 10.x: http://localhost:9300/p2pd/cm_tester.htm
for version 11.x: http://localhost:9300/p2pd/cm_tester.htm

Accessing Cognos via SSL/HTTPS.

In order to connect to Cognos via SSL, you need to import the SSL certificate from the Cognos server onto the machine where MIMB is run.
The procedure below provides the necessary steps.
1. Export SSL certificate from Cognos server machine using ThirdPartyCertificateTool.
On the Cognos server machine, go to <cognos_install>/bin directory and find ThirdPartyCertificateTool.bat on Windows machine, or ThirdPartyCertificateTool.sh on UNIX.
<cognos_install> is the directory where Cognos server is installed:
For Cognos 10 it may be: C:\Program Files\ibm\cognos\c10
For Cognos 11 it may be: C:\Program Files\ibm\cognos\analytics
The examples below are for Windows machine, for UNIX use equivalent commands.
2. Run the tool to export certificate with the following command line:
ThirdPartyCertificateTool.bat -E -T -r cogcert.cer -k "<cognos_install>\configuration\signkeypair\jCAKeystore" -p password
The Keystore file is password protected, and its location may depend on your version:
For Cognos 10 it may be: <cognos_install>\configuration\signkeypair\jCAKeystore
For Cognos 11 it may be: <cognos_install>\configuration\certs\CAMKeystore
As value for the password provide the password which is defined in Cognos Configuration for 'Certificate Authority key store password'.
If you have not changed this value, the default is "NoPassWordSet" (without quotes).
If you can't remember the password you can do an export of your configuration (described in KB 1030350) and open the exported configuration file in an editor.
Search for 'certificateAuthorityKeyFilePassword' and you will find the value for this password.
The cogcert.cer file will contain the certificate we need, so copy it over to the machine where MIMB is run.
3. On the MIMB machine, using the java keytool, create a private keystore by importing the certificate exported above:
keytool -importcert -file cogcert.cer -alias Cognos10 -keystore "${MODEL_BRIDGE_HOME}\jre\lib\security\cognos" -storepass cognos
When you are asked if you want to trust this certificate, confirm with 'yes'.
The new keystore will be created under "${MODEL_BRIDGE_HOME}\jre\lib\security" and named cognos.
You can import multiple certificates into that same keystore (e.g. from multiple servers), and even from different versions of Cognos.
Just give each new certificate a unique alias within that keystore.
4. Now you should be able to run MIMB and connect to Cognos via SSL.

Testing HTTPS/SSL connections.

Connectivity to the Cognos BI server can be tested with the GUI-based Content Manager Browser diagnostic tool.
The tool is available for download from the IBM site free of charge:
https://www-304.ibm.com/connections/blogs/basupportlink/entry/ibm_cognos_bi_content_manager_browser_diagnostic_tool2?lang=en_us
The tool provides detailed information about all objects and properties in the Content Manager database and can work over both secure and non-secure HTTP connections.

How to enable HTTPS/SSL in CM Browser.
a. Download the CM browser archive according to your machine version (32bit or 64bit) and extract it into a separate directory (for example, you can name the directory CM_Browser).
b. In the CM_Browser directory find the file that reads like this: IBMCognosBI_CMBrowser.ini
c. Inside that file, below any existing lines, add two properties as follows, placing each property on a new line:
-Djavax.net.ssl.trustStore=<mimb_home>\jre\lib\security\cognos
-Djavax.net.ssl.trustStorePassword=cognos

See step 3 in the procedure above regarding the creation of the client certificate store.
The first added property should simply point to that store.
d. Run IBMCognosBI_CMBrowser.exe, click on the first yellow key icon and set up connection parameters for your Cognos server:
Content Manager URL, Namespace, username and password.
e. When you are done, press the Login button to connect to the server.
If login is successful, you will be able to browse the contents of the Content Manager database.
If login fails, correct any errors reported by the tool and try again.

MIMB uses a similar mechanism to enable HTTPS/SSL connectivity, so if the CM Browser connection succeeds, then MIMB will likely be able to connect as well.
STRING   http://localhost:9300/p2pd/servlet/dispatch Mandatory
Namespace A namespace defines a collection of user accounts from an authentication provider. See 'Authentication Providers' in the Cognos ReportNet Installation and Configuration Guide. Leave this parameter blank if Cognos authentication has not been configured. STRING      
User Enter the username which the import bridge will use to log in. Be sure this user name has permissions to the objects you wish to import. Leave blank if Cognos authentication has not been configured.

This import bridge is warranted to be read only and will never affect the IBM Cognos contents. It is therefore safe to perform the initial metadata harvesting as 'Administrator' in order to ensure that the entire IBM Cognos content is extracted without any access permission issues. Eventually, the administrator can set up a 'read only' user or group as defined below.

IBM Cognos has five types of permissions (Read, Execute, Traverse, Write and Set Policy) which may be assigned or restricted for a given user, group or role. Of these, it is necessary to ensure that the user ID has the three permissions (Read, Execute and Traverse) assigned for all entries (folders, reports, queries, analysis, packages, connections, etc.) which are included in the import. Such permissions are indeed 'read only' and will not make any changes to the Cognos contents. Remember, many entries depend upon others, e.g., packages use connections, reports use packages, and so on.

Note, the common recommendation in the IBM Cognos documentation (Execute and Traverse) is not sufficient.

Data sources in IBM Cognos can be secured against multiple namespaces. In some environments, the namespace used to secure the data source is not the primary namespace used for access to IBM Cognos Connection. When the import bridge tries to access an entry, such as a report, a query, or an analysis, that is associated with a data source secured against multiple namespaces, you must have specified a userid which has permissions for the required primary namespace.

Refer to the IBM Cognos documentation on permissions and security for more details.
STRING      
Password Enter the password associated with the username which the import bridge will use to log in. Leave blank if Cognos authentication has not been configured. PASSWORD      
Content browsing mode Specifies what will be retrieved when browsing for available content in the Cognos repository.
'Packages only'
The tree of Packages and Folders is retrieved. No reports are retrieved.

'Connections only'
The list of Connections is retrieved.

'All'
The tree of Packages, Folders, Queries and Reports is retrieved. This mode requires more time to complete on large repositories.
ENUMERATED
Packages only
Connections only
All
All  
Personal folders Specify whether personal folders, and the models and reports located in them, should be browsed and imported.
Retrieving metadata from personal folders may be slower on some servers.
BOOLEAN
False
True
True  
Content Allows reducing the import scope to a set of objects smaller than the whole server content. The scope can be controlled by object type (e.g. Report) and location (e.g. Folder). The content string is a semicolon-separated list of individual Cognos search paths used to retrieve objects from Cognos. See the Cognos documentation for the full search path syntax.
Note that search paths that try to retrieve everything under a certain folder, or even the content root, are inefficient and may run for a long time or even cause errors on the Cognos server. It is recommended to use more specific search paths, like "//*[@objectClass='query' or @objectClass='report' or @objectClass='model']" instead of "//*".
Note that in all path items in the list, the ';' and '\' characters are replaced with '\;' and '\\' respectively, to make sure that these special characters do not interfere with the list parsing.

Models may be retrieved using their package name. In case of multiple published versions the latest will be imported (e.g. '/content/package[@name='GO Sales and Retailers']/model').

Reports may be retrieved using their complete search path. A report path can be found using the 'View the search path' link on the report properties page.
Useful multi-query examples:
'//report' all reports
'/content/package[@name='GO Sales and Retailers']//report' all reports in package
REPOSITORY_SUBSET     Mandatory
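The escaping rule for the semicolon-separated Content list can be sketched as follows. This is an illustrative helper only, assuming the replacement order implied by the description above; the actual parsing is done by the bridge.

```python
def escape_path_item(path: str) -> str:
    # Escape backslashes first, so the backslash added when escaping
    # ';' is not itself escaped again.
    return path.replace("\\", "\\\\").replace(";", "\\;")

# Two of the example search paths from the documentation above
items = [
    "/content/package[@name='GO Sales and Retailers']/model",
    "//report",
]
content_value = ";".join(escape_path_item(i) for i in items)
print(content_value)
```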
Add dependent objects Add dependent objects to the initial selection of Cognos objects defined in Content.
'None'
Only the selected Cognos objects are imported.

'Packages referenced by selected reports'
When a report is selected, its source package is imported.

'All'
When a report is selected, its source package is imported; and when a package is selected, its dependent reports are imported. Note that this requires a complete scan of reports dependencies on the Cognos server.
ENUMERATED
None
Packages referenced by selected reports
All
Packages referenced by selected reports  
Incremental import Incremental import only extracts what has changed since the last import. The initial full metadata harvesting (model import) of a very large source system can take a long time. However, the extracted metadata is organized as a multi-model, where each model is a unit of change (e.g. a schema of an RDBMS server, or a report of a BI server). Subsequent model imports are dramatically faster than the initial import, as this import bridge will automatically try to detect changes in the source system in order to only process the modified, added or deleted models and reuse all unchanged metadata from the model cache. Note however that the change detection is more or less efficient depending on the source system: e.g. BI servers can quickly provide the list of new, modified or deleted reports, but not all data stores offer schema-level change detection.

'True'
Import only the changes made since the last import

'False'
Import all metadata. This option is required after upgrading the import bridge, in particular to take full advantage of any additional metadata coverage.

For debugging purposes, the -cache.clear option of the Miscellaneous parameter can be used to clear one model from the cache, which is located (by default) in: $HOME/data/MIMB/cache/<BridgeId>/<ModelId>
BOOLEAN
False
True
True  
Folder representation Specify how the folders from Cognos Framework Manager should be represented.
'Ignore'
The folders are ignored - this is the default selection.

'Flat'
The folders are represented as Diagrams. Their hierarchy is not preserved.

'Hierarchical'
The folders are represented as Diagrams and their hierarchy is preserved.
ENUMERATED
Ignore
Flat
Hierarchical
Ignore  
Transformer import configuration XML file that describes mappings between Cognos Content Manager data sources and PowerPlay Transformer models.
Multiple Content Manager data sources may refer to the same PowerCube which is generated from a single Transformer model.
The import bridge assumes a 1:1 mapping between a PowerCube and the Transformer model.
Each <Model> element corresponds to a single Transformer model (.mdl or .pyj) file and lists all Content Manager data sources that refer to that model's PowerCube.
Optionally it may list Impromptu Query Definition data sources (<iqd> child elements) that require specific database type other than the default.
The configuration file may have multiple <Model> elements.

XML format example:

<ImportConfiguration database="Teradata" dbVersion="1.0.0">
<!-- database: specifies default database for Impromptu Query Definition (IQD) SQL statements-->
<!-- dbVersion format: major version.minor version.release-->

<Model path="some directory\some model.mdl">
<!--Transformer model (.mdl or .pyj) -->
<cmDataSource name="some Cognos datasource name" />
<!-- List IQD data sources for databases other than default -->
<iqd name="Customers" database="Oracle" dbVersion="11.1.0"/>
<iqd name="Products" database="MS SQL Server" dbVersion="8.0.0"/>
</Model>

</ImportConfiguration>
FILE *.xml    
Macro values file File defining a list of macro replacement values:
macro1=value1
macro2=value2
...
macroN=valueN

For example:
$machine=localhost
$runLocale=en
sq($runLocale)='en'
dq('Column ' + $runLocale)="Column en"
$Language_lookup{$runLocale}=EN
prompt('CountryName')=France
'[NAMESPACE].[QUERYSUBJECT].[QUERYITEM_'+$Language_lookup{$runLocale}+']'=[NAMESPACE].[QUERYSUBJECT].[QUERYITEM_EN]

FrameworkManager models may use macros to parameterize SQL statements.
The macros may contain dynamic prompts, which are only defined at runtime.
In such cases, the import bridge will print warnings in the log that it could not determine the value of a macro and will simply leave that macro without any substitution in the resulting model.
In order to determine the correct macro substitution values, the import bridge reads a macro values file with the macro and the correct value to substitute.
FILE *.*    
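The substitution driven by a macro values file can be sketched as below. This is a simplified illustration: the real bridge parses Cognos macro expressions, not plain string keys, and the file name and SQL text here are hypothetical.

```python
# Simplified sketch of macro substitution from a macro values file.
# The bridge's actual expression handling is richer; this only shows
# the name=value lookup and textual replacement idea.
macro_lines = """\
$machine=localhost
$runLocale=en
""".splitlines()

macros = {}
for line in macro_lines:
    line = line.strip()
    if line and "=" in line:
        name, _, value = line.partition("=")
        macros[name] = value

# Hypothetical parameterized SQL using the macros defined above
sql = "SELECT * FROM sales WHERE host = '$machine' AND locale = '$runLocale'"
for name, value in macros.items():
    sql = sql.replace(name, value)

print(sql)
```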
Cognos SDK directory Specify the directory location of the Cognos SDK from which to load the Java libraries.
For example:
C:\Program Files\ibm\cognos\sdk\sdk\java\lib
DIRECTORY      
Multiple threads Number of worker threads to harvest metadata asynchronously.

- Leave the parameter blank to have the import bridge compute the value, between 1 and 6, based on JVM architecture and number of available CPU cores.

- Specify a numeric value greater than or equal to 1 to provide the actual number of threads.
If the value specified is invalid, a warning will be issued and 1 will be used instead.
If you experience out-of-memory conditions when harvesting metadata asynchronously, experiment with smaller numbers.
If your machine has a lot of available memory (e.g. 10 GB or more), you can try larger numbers when harvesting many documents at once.
Note that setting the number too high can actually decrease performance due to resource contention.
NUMERIC      
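The automatic default described above can be thought of as clamping the available core count to the documented 1 to 6 range. The exact formula used by the bridge is internal, so the sketch below is only a hypothetical approximation.

```python
import os

def default_thread_count() -> int:
    # Hypothetical approximation of the bridge's automatic choice:
    # clamp the available CPU core count to the documented 1..6 range.
    return max(1, min(6, os.cpu_count() or 1))

print(default_thread_count())
```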
Miscellaneous INTRODUCTION
Specify miscellaneous options starting with a dash and optionally followed by parameters, e.g.
-connection.cast MyDatabase1="MICROSOFT SQL SERVER"
Some options can be used multiple times if applicable, e.g.
-connection.rename NewConnection1=OldConnection1 -connection.rename NewConnection2=OldConnection2;
As the list of options can become a long string, it is possible to load it from a file which must be located in ${MODEL_BRIDGE_HOME}\data\MIMB\parameters and have the extension .txt. In such a case, all options must be defined within that file as the only value of this parameter, e.g.
ETL/Miscellaneous.txt

JAVA ENVIRONMENT OPTIONS
-java.memory <Java Memory's maximum size> (previously -m)

1G by default on 64bits JRE or as set in conf/conf.properties, e.g.
-java.memory 8G
-java.memory 8000M

-java.parameters <Java Runtime Environment command line options> (previously -j)

This option must be the last one in the Miscellaneous parameter as all the text after -java.parameters is passed "as is" to the JRE, e.g.
-java.parameters -Dname=value -Xms1G
The following option must be set when a proxy is used to access the internet (this is critical to access https://repo.maven.apache.org/maven2/ and exceptionally a few other tool sites) in order to download the necessary third-party software libraries.
Note: the majority of proxies are concerned with encrypting (HTTPS) the traffic outside of the company and trust the inside traffic that can access the proxy over HTTP. In this case, an HTTPS request reaches the proxy over HTTP, where the proxy HTTPS-encrypts it.
-java.parameters -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=3128 -Dhttp.proxyUser=user -Dhttp.proxyPassword=pass

-java.executable <Java Runtime Environment full path name> (previously -jre)

It can be an absolute path to javaw.exe on Windows or a link/script path on Linux, e.g.
-java.executable "c:\Program Files\Java\jre\bin\javaw.exe"

Some API based bridges (e.g. JDBC) may require an SSL / TLS based secure connection. No setup is needed when using an official certificate signed by a Certificate Authority (CA). However, when using a self-signed certificate, such a certificate needs to be imported into your Java environment (before restarting your Java application), typically as follows:
cd $JAVA_HOME/jre/lib/security
mv jssecacerts jssecacerts.old
$JAVA_HOME/bin/keytool -importkeystore -srckeystore {your_keystore} -keystore jssecacerts

-environment.variable <name>=<value> (previously -v)

None by default, e.g.
-environment.variable var2="value2 with spaces"

MODEL IMPORT OPTIONS
-model.name <model name>

Override the model name, e.g.
-model.name "My Model Name"

-prescript <script name>

This option allows running a script before the bridge execution.
The script must be located in the bin directory (or as specified with M_SCRIPT_PATH in conf/conf.properties), and have .bat or .sh extension.
The script path must not include any parent directory symbol (..).
The script should return exit code 0 to indicate success, or another value to indicate failure.
For example:
-prescript "script.bat arg1 arg2"

-postscript <script name>

This option allows running a script after successful execution of the bridge.
The script must be located in the bin directory (or as specified with M_SCRIPT_PATH in conf/conf.properties), and have .bat or .sh extension.
The script path must not include any parent directory symbol (..).
The script should return exit code 0 to indicate success, or another value to indicate failure.
For example:
-postscript "script.bat arg1 arg2"

-cache.clear

Clears the cache before the import, and therefore will run a full import without incremental harvesting.

If the model was not changed and the -cache.clear parameter is not used (incremental harvesting), then a new version will not be created.
If the model was not changed and the -cache.clear parameter is set (full source import instead of incremental), then a new version will be created.

-backup <directory>

This option allows saving the bridge input metadata for further troubleshooting. The provided <directory> must be empty.

The primary use of this option is for data store import bridges, in particular JDBC based database import bridges.

Note that this option is not operational on some bridges including:
- File based import bridges (as such input files can be used instead)
- DI/BI repository import bridges (as the tool's repository native backup can be used instead)
- Some API based import bridges (e.g. COM based) for technical reasons.

DATA CONNECTION OPTIONS
Data connections are produced by the import bridges, typically from ETL/DI and BI tools, to refer to the source and target data stores they use. These data connections are then used by metadata management tools to connect them (metadata stitching) to their actual data stores (e.g. databases, file systems, etc.) in order to produce the full end-to-end data flow lineage and impact analysis. The name of each data connection is unique per import model. The data connection names used within DI/BI design tools are used when possible; otherwise, connection names are generated to be short but meaningful, such as the database/schema name, the file system path, or the Uniform Resource Identifier (URI). The following options allow you to manipulate connections. These options replace the legacy options -c, -cd, and -cs.

-connection.cast ConnectionName=ConnectionType

Casts a generic database connection (e.g. ODBC/JDBC) to a precise database type (e.g. ORACLE) for SQL Parsing, e.g.
-connection.cast "My Database"="MICROSOFT SQL SERVER".
The list of supported data store connection types includes:
ACCESS
APACHE CASSANDRA
DB2/UDB
DENODO
GOOGLE BIGQUERY
HIVE
MYSQL
NETEZZA
ORACLE
POSTGRESQL
PRESTO
REDSHIFT
SALESFORCE
SAP HANA
SNOWFLAKE
MICROSOFT SQL AZURE
MICROSOFT SQL SERVER
SYBASE SQL SERVER
SYBASE AS ENTERPRISE
TERADATA
VECTORWISE
HP VERTICA

-connection.rename OldConnection=NewConnection

Renames an existing connection to a new name, e.g.
-connection.rename OldConnectionName=NewConnectionName
Multiple existing database connections can be renamed and merged into one new database connection, e.g.
-connection.rename MySchema1=MyDatabase -connection.rename MySchema2=MyDatabase
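The rename-and-merge behavior above can be sketched as applying a name map and collapsing duplicates. This is an illustrative model of the option's effect, not bridge code:

```python
def apply_renames(connections, renames):
    """Apply -connection.rename pairs; a merge happens when several old
    names map to the same new name (duplicates collapse into one)."""
    renamed = [renames.get(name, name) for name in connections]
    # Preserve order while collapsing merged duplicates
    return list(dict.fromkeys(renamed))

# -connection.rename MySchema1=MyDatabase -connection.rename MySchema2=MyDatabase
print(apply_renames(["MySchema1", "MySchema2", "Other"],
                    {"MySchema1": "MyDatabase", "MySchema2": "MyDatabase"}))
# ['MyDatabase', 'Other']
```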

-connection.split oldConnection.Schema1=newConnection

Splits a database connection into one or multiple database connections.
A single database connection can be split into one connection per schema, e.g.
-connection.split MyDatabase
All database connections can be split into one connection per schema, e.g.
-connection.split *
A database connection can be explicitly split creating a new database connection by appending a schema name to a database, e.g.
-connection.split MyDatabase.schema1=MySchema1
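The split behavior can be modeled as grouping a connection's schema-qualified objects into one connection per schema, with an optional explicit name for a given schema. A hypothetical sketch:

```python
def split_connection(connection, tables, schema_renames=None):
    """Group a connection's tables (schema-qualified names) into one
    connection per schema, modeling -connection.split (illustrative)."""
    schema_renames = schema_renames or {}
    result = {}
    for qualified in tables:
        schema, table = qualified.split(".", 1)
        # -connection.split MyDatabase.schema1=MySchema1 style override
        new_conn = schema_renames.get(schema, f"{connection}.{schema}")
        result.setdefault(new_conn, []).append(table)
    return result

print(split_connection("MyDatabase",
                       ["schema1.T1", "schema1.T2", "schema2.T3"],
                       {"schema1": "MySchema1"}))
# {'MySchema1': ['T1', 'T2'], 'MyDatabase.schema2': ['T3']}
```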

-connection.map SourcePath=DestinationPath

Maps a source path to a destination path. This is useful for file system connections when different paths point to the same object (directory or file).
On Hadoop, a process can write into a CSV file specified with the HDFS full path, while another process reads from a Hive table implemented (external) over the same file, specified using a relative path with a default file name and extension, e.g.
-connection.map /user1/folder=hdfs://host:8020/users/user1/folder/file.csv
On Linux, a given directory (or file) like /data can be referred to by multiple symbolic links like /users/john and /users/paul, e.g.
-connection.map /data=/users/john -connection.map /data=/users/paul
On Windows, a given directory like C:\data can be referred to by multiple network drives like M: and N:, e.g.
-connection.map C:\data=M:\ -connection.map C:\data=N:\
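The net effect of the mappings above is that alias paths for the same directory resolve to one canonical location for stitching. The sketch below models that effect only (the function and the alias-to-canonical direction are illustrative; the option itself takes SourcePath=DestinationPath pairs):

```python
def resolve_alias(path, alias_to_canonical):
    """Resolve path aliases to one canonical prefix so different spellings
    of the same directory stitch to the same connection (illustrative)."""
    for alias, canonical in alias_to_canonical.items():
        alias = alias.rstrip("/")
        if path == alias or path.startswith(alias + "/"):
            return canonical + path[len(alias):]
    return path

# Symbolic links /users/john and /users/paul both point at /data
aliases = {"/users/john": "/data", "/users/paul": "/data"}
print(resolve_alias("/users/john/file.csv", aliases))  # /data/file.csv
```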

-connection.casesensitive ConnectionName

Overrides the default case-insensitive matching rules for the object identifiers inside the specified connection, provided the detected data store type supports this configuration (e.g. Microsoft SQL Server, MySQL, etc.), e.g.
-connection.casesensitive "My Database"
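The difference between the default matching and the case-sensitive override can be shown with a small sketch (hypothetical helper, not bridge code):

```python
def identifiers_match(a: str, b: str, case_sensitive: bool = False) -> bool:
    """Default identifier matching is case-insensitive;
    -connection.casesensitive switches the named connection to exact matching."""
    return a == b if case_sensitive else a.lower() == b.lower()

print(identifiers_match("Orders", "ORDERS"))                       # True (default)
print(identifiers_match("Orders", "ORDERS", case_sensitive=True))  # False
```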

-connection.level AggregationLevel

Specifies the aggregation level for the external connections, e.g.
-connection.level catalog
The list of the supported values:
server
catalog
schema (default)
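The aggregation level determines how much of the data store hierarchy is kept in each external connection's identity. A hypothetical sketch of how the grouping key could be built at each level:

```python
def connection_key(server: str, catalog: str, schema: str,
                   level: str = "schema") -> str:
    """Build an external-connection grouping key for a -connection.level
    value (illustrative model, not the bridge's internal representation)."""
    parts = {
        "server": (server,),
        "catalog": (server, catalog),
        "schema": (server, catalog, schema),  # default
    }
    return "/".join(parts[level])

print(connection_key("srv", "cat", "sch"))                   # srv/cat/sch
print(connection_key("srv", "cat", "sch", level="catalog"))  # srv/cat
```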

IBM COGNOS CONTENT MANAGER OPTIONS
-cognos.removeReportPages (previously -r)

Remove the report pages and their graphical structure.

-cognos.useReportSpecificationName

Use the Cognos report specification name instead of the Cognos content manager name.

-cognos.skipFoldersQuery

Avoid querying the list of Folders.

Bridge Mapping

Meta Integration Repository (MIR) Metamodel    "IBM Cognos Content Manager" Metamodel
(based on the OMG CWM standard)                Cognos BI Reporting (Repository)

DirectoryStructureModel                        Repository
  CreationTime                                 Creation Time
  Description                                  Description
  ImportDate                                   Import Date
  Name                                         Name
  NativeId                                     Native Id
  NativeType                                   Native Type
Folder                                         Dynamic Cube, User Account, Package, Folder
  Author                                       Author
  CreationTime                                 Creation Time
  Description                                  Description
  LastModificationTime                         Last Modification Time
  Modifier                                     Modifier
  Name                                         Name
  NativeId                                     Native Id
  NativeType                                   Native Type
StoreContent                                   Transformer, Query, DataSet, Report, Connection, Data module, Model, Dashboard, Active Report
  Author                                       Author
  CreationTime                                 Creation Time
  Description                                  Description
  LastModificationTime                         Last Modification Time
  Modifier                                     Modifier
  Name                                         Name
  NativeId                                     Native Id
  NativeType                                   Native Type