- In the Datasets view of the Talend Data Preparation homepage, click the white arrow next to the Add Dataset button.
- Select From HDFS.
The Add an HDFS dataset form opens.
- In the Dataset name field, enter the name you want to give your dataset.
- In the User name field, enter your Linux user name.
This user must have read permissions on the file that you want to import.
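Before importing, you can check read access from the command line. A minimal sketch, assuming the `hdfs` client is installed and using a hypothetical file path:

```shell
# List the file to confirm it exists and inspect its permission bits
# (the path below is an example; use your own).
hdfs dfs -ls /user/alice/sales.csv

# Attempt to read the first few lines; a permission error here means
# your user lacks read rights on the file.
hdfs dfs -cat /user/alice/sales.csv | head -n 5
```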
- To enable Kerberos authentication, select the Use Kerberos check box.
- In the Principal field, enter the name of the service principal.
- In the Keytab file field, enter the location of your keytab file.
The keytab file must be accessible to the Spark Job Server.
You can manually configure Talend Data Preparation to display default values in these fields.
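You can confirm that the principal and keytab pair is valid before filling in the form by using the standard Kerberos tools. A sketch with an example principal name and keytab path:

```shell
# List the principals stored in the keytab; the principal you enter in the
# form must appear in this list (the path is an example).
klist -kt /etc/security/keytabs/dataprep.keytab

# Try to obtain a ticket non-interactively with the keytab; success
# confirms that the keytab matches the principal.
kinit -kt /etc/security/keytabs/dataprep.keytab dataprep@EXAMPLE.COM
```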
- In the Format field, select the format that corresponds to the file that you want to import.
For HDFS files, Talend Data Preparation supports the CSV, Avro, and Parquet formats.
If you choose CSV, select the record delimiter and field delimiter used for the file you want to import.
- In the Path field, enter the complete URL of your file in the Hadoop cluster.
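The URL combines the `hdfs://` scheme, the NameNode host and port, and the absolute path to the file. A sketch of the expected shape, where the host, port, and path are placeholder values:

```shell
# Example HDFS URL: scheme + NameNode host:port + absolute file path.
# All values below are placeholders; substitute your cluster's own.
HDFS_URL="hdfs://namenode.example.com:8020/user/alice/sales.csv"
echo "$HDFS_URL"
```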
- Click the Add Dataset button.
The data extracted from the cluster opens directly in the grid, and you can start working on your preparation.
The data never leaves the cluster: Talend Data Preparation only retrieves a sample on demand.
Your dataset is now available in the Datasets view of the application home page.