Linking the components

Procedure

  1. In the Integration perspective of Talend Studio, create an empty Spark Batch Job from the Job Designs node in the Repository tree view.
    For further information about how to create a Spark Batch Job, see Creating Spark Batch Jobs.
  2. In the workspace, enter the name of each component to be used and select it from the list that appears. In this scenario, the components are tHDFSConfiguration, two tFixedFlowInput components (label one customer_base and the other web_data), tSqlRow, tCacheOut, tCacheIn, tMap, tExtractDelimitedFields, tAggregateRow, tTop and tLogRow.
    The tFixedFlowInput components are used to load the sample data into the data flow. In real-world practice, you can use other components, such as tMysqlInput, alone or together with a tMap, in place of tFixedFlowInput to design a more sophisticated process for preparing your data.
  3. Connect customer_base (tFixedFlowInput), tSqlRow and tCacheOut using Row > Main links. In this subJob, the records of the Silver-level customers are selected and stored in the cache (an illustrative sketch of the complete data flow follows this procedure).
  4. Connect web_data (tFixedFlowInput) to tMap using the Row > Main link. This is the main input flow to the tMap component.
  5. Do the same to connect tCacheIn to tMap. This is the lookup flow to tMap.
  6. Connect tMap to tExtractDelimitedFields using the Row > Main link and name this connection in the dialog box that is displayed. For example, name it output.
  7. Connect tExtractDelimitedFields, tAggregateRow, tTop and tLogRow using Row > Main links.
  8. Connect customer_base to web_data using the Trigger > OnSubjobOk link.
  9. Leave the tHDFSConfiguration component as it is, without connecting it to any other component.
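
Once linked, the Job implements the following logic: select the Silver-level customers, cache them, join the web clickstream against that cache, extract the delimited fields, aggregate, and keep the top results. The PySpark sketch below is only an illustrative equivalent of that data flow, not the code Talend generates from the Job; the column names (id, name, level, record, page), the sample rows and the application name are assumptions made up for this sketch.

    # Illustrative PySpark equivalent of the Job's data flow.
    # NOT Talend-generated code; schemas and sample rows are assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("silver_customers_web_activity").getOrCreate()

    # customer_base (tFixedFlowInput) -> tSqlRow -> tCacheOut:
    # select the Silver-level customers and cache them.
    customer_base = spark.createDataFrame(
        [(1, "Alice", "Silver"), (2, "Bob", "Gold"), (3, "Carol", "Silver")],
        ["id", "name", "level"],
    )
    silver = customer_base.filter(F.col("level") == "Silver").cache()

    # web_data (tFixedFlowInput) -> tMap: the clickstream is the main flow,
    # the cached Silver customers (tCacheIn) are the lookup flow of the join.
    web_data = spark.createDataFrame(
        [(1, "1;/home;2017-06-01"), (1, "1;/cart;2017-06-01"), (3, "3;/home;2017-06-02")],
        ["id", "record"],
    )
    joined = web_data.join(silver, on="id", how="inner")   # tMap output flow ("output")

    # tExtractDelimitedFields: split the semicolon-delimited record into columns.
    parts = F.split(F.col("record"), ";")
    extracted = joined.withColumn("page", parts.getItem(1))

    # tAggregateRow + tTop: count page visits per customer and keep the top rows.
    top = (extracted.groupBy("id", "name")
                    .agg(F.count("*").alias("visits"))
                    .orderBy(F.col("visits").desc())
                    .limit(2))

    top.show()   # tLogRow: print the result to the console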
