Adding a new database type

Talend Data Preparation allows a direct connection to various types of databases, which you can use as sources to create new datasets.

You can manually enrich the list of databases from which you can import data.

The list of database types available for dataset creation depends on the JDBC drivers that you have stored in the <components_catalog_path>/.m2 folder.
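As a quick sanity check, you can list the driver .jar files that the Components Catalog currently holds. This is a sketch: /tmp/components_catalog is a placeholder for your actual <components_catalog_path>.

```shell
# Placeholder install path; substitute your real <components_catalog_path>.
COMPONENTS_CATALOG=/tmp/components_catalog
mkdir -p "$COMPONENTS_CATALOG/.m2"           # this folder already exists on a real install

# Each .jar file below this folder backs one entry in the database type list
find "$COMPONENTS_CATALOG/.m2" -name '*.jar'
```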

Let's say that you have customer data stored in an Oracle database, and you want to import it into Talend Data Preparation to perform cleansing operations. To add this new data source to the Talend Data Preparation interface, you will add a JDBC driver .jar file specific to Oracle databases to the Components Catalog folder structure.

In a Big Data context, if you want to run preparations made on data from your Oracle database on the Hadoop cluster, the same driver must also be added to the Spark Job Server folder structure.

You do not need to stop or restart any of the services to complete the following procedure.


  1. Download the latest Oracle JDBC driver, ojdbc8-<version>.jar, from the MVN Repository website.
  2. Create the <components_catalog_path>/.m2/com/oracle/database/jdbc/ojdbc8/ folder.
  3. Copy the ojdbc8-<version>.jar file into the newly created folder.
  4. Update the <components_catalog_path>/config/jdbc_config.json file by adding the following lines:
    		"id" : "ORACLE",
    		"class" : "oracle.jdbc.OracleDriver",
    		"url" : "jdbc:oracle:thin:@//<server ip>:<server port>/<database>",
    		"paths" :
    			{"path" : ""}


    • id is the value that will be displayed in the Talend Data Preparation interface as Database type.
    • class is the driver class used to communicate with the database.
    • url is the URL template to access a database.
    • path is the path to your database driver; it is identical to the path mentioned in step 2.
  5. To enable export on the Hadoop cluster for the new dataset type, copy the ojdbc8-<version>.jar file to the <spark_job_server_path>/datastreams-deps/ folder.
  6. Copy the changes made in the <components_catalog_path>/config/jdbc_config.json file, and paste them into the <spark_job_server_path>/jdbc_config.json file.
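The file operations in steps 2, 3, and 5 can be sketched as shell commands. Everything below is a placeholder: the two install paths stand in for your actual <components_catalog_path> and <spark_job_server_path>, and 19.3.0.0 is only an example driver version. The JSON edits in steps 4 and 6 are still done by hand.

```shell
# Placeholder paths and version; adjust to your environment.
COMPONENTS_CATALOG=/tmp/components_catalog      # stand-in for <components_catalog_path>
SPARK_JOB_SERVER=/tmp/spark_job_server          # stand-in for <spark_job_server_path>
DRIVER_JAR=/tmp/ojdbc8-19.3.0.0.jar             # the driver you downloaded in step 1
touch "$DRIVER_JAR"                             # stand-in for the real downloaded .jar

# Step 2: create the Maven-style folder for the Oracle driver
mkdir -p "$COMPONENTS_CATALOG/.m2/com/oracle/database/jdbc/ojdbc8"

# Step 3: copy the driver into the newly created folder
cp "$DRIVER_JAR" "$COMPONENTS_CATALOG/.m2/com/oracle/database/jdbc/ojdbc8/"

# Step 5: copy the same driver to the Spark Job Server dependencies,
# so that preparations can also run on the Hadoop cluster
mkdir -p "$SPARK_JOB_SERVER/datastreams-deps"
cp "$DRIVER_JAR" "$SPARK_JOB_SERVER/datastreams-deps/"
```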


The Oracle database is now available in the database type drop-down list in the import form.

When exporting a preparation made on data stored in your Oracle database, you can choose to process the data on the Talend Data Preparation server, or on a Hadoop cluster if you are using Big Data.
