Centralizing HDFS metadata - 6.4

Talend Open Studio for Big Data User Guide

If you often need to use a file schema from HDFS, the Hadoop Distributed File System, you may want to centralize the HDFS connection information and the schema details in the Metadata folder of the Repository tree view.

Prerequisites:

  • Launch the Hadoop distribution you need to use and ensure that you have the proper access permission to that distribution and its HDFS.

  • Create the connection to that Hadoop distribution from the Hadoop cluster node. For further information, see Centralizing a Hadoop connection.

Creating a connection to HDFS

  1. Expand the Hadoop cluster node under Metadata in the Repository tree view, right-click the Hadoop connection to be used and select Create HDFS from the contextual menu.

  2. In the connection wizard that opens up, fill in the generic properties of the connection you need to create, such as Name, Purpose and Description. The Status field is a customized field you can define in File > Edit Project properties.

  3. Click Next when completed. The second step requires you to fill in the HDFS connection data. The User name property is automatically pre-filled with the value inherited from the Hadoop connection you selected in the previous steps.

    The Row separator and Field separator properties use their default values.

    If the Hadoop connection you are using has Kerberos security enabled, the User name field is automatically deactivated.

  4. If the data to be accessed in HDFS includes header rows that you want to ignore, select the Header check box and enter the number of header rows to be skipped.

  5. If you need to define column names for the data to be accessed, select the Set heading row as column names check box. The Studio then uses the last of the skipped rows as the column names of the data.

    For example, if you select this check box and enter 1 in the Header field, then when you retrieve the schema of the data to be used, the first row of the data is not read as the data body but is used as the column names. A minimal sketch of this behavior is given after this procedure.

  6. If you need to use a custom HDFS configuration for the Hadoop distribution to be used, click the [...] button next to Hadoop properties to open the corresponding properties table and add the property or properties to be customized. At runtime, these changes override the corresponding default properties used by the Studio for its Hadoop engine. An illustrative example of such an override is sketched after this procedure.

    Note that a Parent Hadoop properties table is displayed above the properties table you are editing. This parent table is read-only and lists the Hadoop properties defined in the wizard of the parent Hadoop connection on which the current HDFS connection is based.

    For further information about the HDFS-related properties of Hadoop, see Apache's Hadoop documentation on http://hadoop.apache.org/docs/current/, or the documentation of the Hadoop distribution you need to use. For example, the following page lists some of the default HDFS-related Hadoop properties: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml.

    For further information about how to leverage this properties table, see Setting reusable Hadoop properties.

  7. Change the default separators if necessary and click Check to verify your connection.

    A message pops up to indicate whether the connection is successful.

  8. Click Finish to validate these changes.

    The created HDFS connection is now available under the Hadoop cluster node in the Repository tree view.

    Note

    This Repository view may vary depending on the edition of the Studio you are using.

    If you need to use an environmental context to define the parameters of this connection, click the Export as context button to open the corresponding wizard and choose from the following options:

    • Create a new repository context: create this environmental context from the current connection, that is to say, the parameters set in the wizard are taken as context variables with the values you have given to these parameters.

    • Reuse an existing repository context: use the variables of a given environmental context to configure the current connection.

    If you need to cancel the use of the context, click Revert context. The values of the context variables in use are then put directly back into this wizard.

    For a step-by-step example about how to use this Export as context feature, see Exporting metadata as context and reusing context parameters to set up a connection.

  9. Right-click the created connection and select Retrieve schema from the contextual menu to load the desired file schema from the established connection.
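
For reference, the Header and Set heading row as column names options described in steps 4 and 5 can be pictured with the minimal sketch below. It is illustrative only and is not code generated by the Studio; the NameNode URI, user name, file path, header count and field separator value are assumptions made for the example, and reading line by line assumes a newline row separator.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HeaderRowSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical NameNode URI and user name, standing in for the wizard's connection data.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://namenode:8020"), new Configuration(), "hdfs_user");

            int headerRows = 1;           // value entered in the Header field (assumed)
            String fieldSeparator = ";";  // field separator set in the wizard (assumed)
            String[] columnNames = null;

            try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                    fs.open(new Path("/user/hdfs_user/data/customers.csv"))))) {
                // Skip the header rows; keep the last skipped row as the column names,
                // which is what Set heading row as column names amounts to.
                for (int i = 0; i < headerRows; i++) {
                    String skipped = reader.readLine();
                    if (skipped != null) {
                        columnNames = skipped.split(fieldSeparator);
                    }
                }
                String row;
                while ((row = reader.readLine()) != null) {
                    String[] fields = row.split(fieldSeparator); // data body rows only
                    // ... process fields against columnNames
                }
            }
            if (columnNames != null) {
                System.out.println("Column names: " + String.join(", ", columnNames));
            }
            fs.close();
        }
    }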

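Similarly, the sketch below illustrates what a custom entry in the Hadoop properties table of step 6 amounts to at runtime on the Hadoop client side: the value overrides the default loaded from the configuration files. It is not code generated by the Studio; the property dfs.replication, its value, the NameNode URI and the user name are assumptions chosen for the example.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HadoopPropertyOverrideSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // loads the default Hadoop properties
            // Equivalent of adding dfs.replication=2 in the Hadoop properties table:
            // this value overrides the default from hdfs-default.xml for this client.
            conf.set("dfs.replication", "2");
            try (FileSystem fs = FileSystem.get(
                    URI.create("hdfs://namenode:8020"), conf, "hdfs_user")) {
                System.out.println("Effective replication: " + conf.get("dfs.replication"));
            }
        }
    }
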
Retrieving a file schema

  1. When you click Retrieve schema, a new wizard opens in which you can filter and display different objects (an Avro file, for example) in the HDFS. An illustrative sketch of what retrieving the schema of an Avro file corresponds to is given at the end of this section.

  2. In the Name filter field, you can enter the name of the file(s) you are looking for in order to filter the list.

    Otherwise, you can expand the folders listed in this wizard and select the check box in front of the file(s) whose schema(s) you need to retrieve.

    Each time the schema retrieval completes for a selected file, the Creation status of that file changes to Success.

  3. Click Next to open a new view in the wizard that lists the selected file schema(s). You can select any of them to display its details in the Schema area.

  4. Modify the selected schema if needed. You can rename the schema and, according to your needs, customize its structure in the Schema area.

    The toolbar allows you to add, remove or move columns in your schema.

    To overwrite your modifications to the selected schema with its default schema, click Retrieve schema. Note that this overwriting does not retain any custom edits.

  5. Click Finish to complete the HDFS file schema creation. All the retrieved schemas are displayed under the related HDFS connection node in the Repository view.

    If you later need to edit a schema, right-click it under the relevant HDFS connection node in the Repository view and select Edit Schema from the contextual menu to open this wizard again and make your modifications.

    Note

    If you modify the schemas, ensure that the data type in the Type column is correctly defined.
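
For reference, retrieving the schema of an Avro file, as mentioned in step 1, corresponds on the Hadoop side to reading the schema embedded in the file itself. The sketch below is illustrative only and does not reflect how the Studio necessarily implements the retrieval; the NameNode URI, user name and file path are assumptions.

    import java.net.URI;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileStream;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AvroSchemaSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical NameNode URI, user name and Avro file path.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://namenode:8020"), new Configuration(), "hdfs_user");
            try (DataFileStream<GenericRecord> stream = new DataFileStream<>(
                    fs.open(new Path("/user/hdfs_user/data/customers.avro")),
                    new GenericDatumReader<GenericRecord>())) {
                // The schema is stored in the Avro file header; no data rows need to be read.
                Schema schema = stream.getSchema();
                System.out.println(schema.toString(true));
            }
            fs.close();
        }
    }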