This scenario uses a four-component Job to join the columns selected from two Hive tables and write them into another Hive table.
Create the Hive table to which you want to write data. In this scenario, this table is named agg_result, and you can create it using the following statement in tHiveRow:
create table agg_result
  (id int, name string, address string, sum1 string, postal string,
   state string, capital string, mostpopulouscity string)
  partitioned by (type string)
  row format delimited fields terminated by ';'
  location '/user/ychen/hive/table/agg_result'
In this statement, '/user/ychen/hive/table/agg_result' is the directory used in this scenario to store this created table in HDFS. You need to replace it with the directory you want to use in your environment.
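Once the table is created, you can verify its definition directly in Hive, for example with a describe statement:

```sql
-- Lists the columns of agg_result, including the type partition column
describe agg_result;
```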
For further information about tHiveRow, see tHiveRow.
Create the two input Hive tables that contain the columns you want to join and aggregate into the output Hive table, agg_result. The statements to be used are:
create table customer
  (id int, name string, address string, idState int, id2 int,
   regTime string, registerTime string, sum1 string, sum2 string)
  row format delimited fields terminated by ';'
  location '/user/ychen/hive/table/customer'

create table state_city
  (id int, postal string, state string, capital int, mostpopulouscity string)
  row format delimited fields terminated by ';'
  location '/user/ychen/hive/table/state_city'
Use tHiveRow to load data into the two input tables, customer and state_city. The statements to be used are:
"LOAD DATA LOCAL INPATH 'C:/tmp/customer.csv' OVERWRITE INTO TABLE customer"
"LOAD DATA LOCAL INPATH 'C:/tmp/State_City.csv' OVERWRITE INTO TABLE state_city"
The two files, customer.csv and State_City.csv, are local files created for this scenario. You need to create your own files to provide data to the input Hive tables. The data schema of each file must match that of its corresponding table.
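As an illustration only (the values below are made up for this scenario), the content of such files could look like the following. Each row must follow the column order of its table and use the semicolon delimiter declared in the create statements:

```
customer.csv  (id;name;address;idState;id2;regTime;registerTime;sum1;sum2)
1;Anna;12 Main Street;2;10;2011-06-15;2011-06-16;300;250

State_City.csv  (id;postal;state;capital;mostpopulouscity)
2;CA;California;1;Los Angeles
```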
For further information about the Hive query language, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual.
In the Integration perspective of Talend Studio, create an empty Job from the Job Designs node in the Repository tree view.
For further information about how to create a Job, see Talend Studio User Guide.
Drop two tELTHiveInput components, a tELTHiveMap component, and a tELTHiveOutput component onto the workspace.
Connect them using Row > Main links.
Each time you connect two components, a wizard prompts you to name the link you are creating. This name must be identical to that of the Hive table you want the active component to process. In this scenario, the input tables handled by the two tELTHiveInput components are customer and state_city, and the output table handled by tELTHiveOutput is agg_result.
Double-click the tELTHiveInput component using the customer link to open its Component view.
Click the [...] button next to Edit schema to open the schema editor.
Click the [+] button as many times as required to add columns, and rename them to replicate the schema of the customer table created earlier in Hive.
To set up this schema, you can also use the customer schema that you retrieved and stored in the Repository. For further information about how to set up a connection to Hive and retrieve and store the schema in the Repository, see the Talend Studio User Guide.
In the Default table name field, enter the name of the input table, customer, to be processed by this component.
Double-click the other tELTHiveInput component using the state_city link to open its Component view.
Click the [...] button next to Edit schema to open the schema editor.
Click the [+] button as many times as required to add columns, and rename them to replicate the schema of the state_city table created earlier in Hive.
In the Default table name field, enter the name of the input table, state_city, to be processed by this component.
Configuring the connection to Hive
Click tELTHiveMap, then click Component to open its Component view.
In the Version area, select the Hadoop distribution you are using and the Hive version.
In the Connection mode list, select the connection mode you want to use. If your distribution is Hortonworks, Embedded is the only available mode.
In the Host field and the Port field, enter the authentication information for the component to connect to Hive. For example, the host is talend-hdp-all and the port is 9083.
Select the Set Jobtracker URI check box and enter the location of the Jobtracker. For example, talend-hdp-all:50300.
Select the Set NameNode URI check box and enter the location of the NameNode. For example, hdfs://talend-hdp-all:8020.
Mapping the schemas
Click ELT Hive Map Editor to map the schemas.
On the input side (left in the figure), click the Add alias button to add the table to be used.
In the pop-up window, select the customer table, then click OK.
Repeat the operations to select the state_city table.
Drag and drop the idState column from the customer table onto the id column of the state_city table. An inner join is then created automatically.
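In HiveQL terms, the join created by this drop operation is roughly equivalent to the following fragment (a sketch for understanding, not the literal query the component generates):

```sql
... FROM customer JOIN state_city ON (customer.idState = state_city.id) ...
```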
On the output side (the right side in the figure), the agg_result table is empty at first. Click the [+] button at the bottom of this side to add as many columns as required, and rename them to replicate the schema of the agg_result table you created earlier in Hive.
The type column is the partition column of the agg_result table and should not be replicated in this schema. For further information about the partition column of the Hive table, see the Hive manual.
From the customer table, drop id, name, address, and sum1 to the corresponding columns in the agg_result table.
From the state_city table, drop postal, state, capital and mostpopulouscity to the corresponding columns in the agg_result table.
Click OK to validate these changes.
Double-click tELTHiveOutput to open its Component view.
If this component does not have the same schema as the preceding component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema from the preceding component; once done, the warning icon disappears.
In the Default table name field, enter the name of the output table you want to write data to. In this example, it is agg_result.
In the Field partition table, click [+] to add one row. This allows you to write data to the partition column of the agg_result table.
This partition column was defined by the partitioned by (type string) clause in the Create statement presented earlier. This partition column is type, which describes the type of a customer.
In Partition column, enter type without any quotation marks and in Partition value, enter prospective in single quotation marks.
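Putting the mapping and this partition setting together, the statement executed by the ELT components should be roughly equivalent to the following HiveQL (a sketch for understanding, not the literal query the Studio generates):

```sql
INSERT INTO TABLE agg_result PARTITION (type='prospective')
SELECT customer.id, customer.name, customer.address, customer.sum1,
       state_city.postal, state_city.state, state_city.capital,
       state_city.mostpopulouscity
FROM customer
JOIN state_city ON (customer.idState = state_city.id);
```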
Press F6 to run this Job.
Once done, verify agg_result in Hive using, for example,
select * from agg_result;
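You can also confirm that the prospective partition was created, for example with:

```sql
show partitions agg_result;
```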
This figure presents only part of the table. You can see that the selected input columns are aggregated and written into the agg_result table, and that the partition column is filled with the value prospective.