Linking the components

Procedure

  1. In the Integration perspective of the Studio, create an empty Spark Batch Job from the Job Designs node in the Repository tree view.
    For further information about how to create a Spark Batch Job, see the Talend Big Data Getting Started Guide.
  2. In the workspace, enter the name of the component to be used and select this component from the list that appears. In this scenario, the components are tHDFSConfiguration, tMongoDBConfiguration, tFixedFlowInput, tMongoDBOutput, tMongoDBLookupInput, tMap and tLogRow.
    tFixedFlowInput is used to load the movie data into the data flow. In real-world practice, you can use other components, such as tFileInputDelimited, instead, to design a more sophisticated process for preparing your data.
  3. Connect tFixedFlowInput to tMap using the Row > Main link.
    This way, the main flow to tMap is created. The movie information is sent via this flow.
  4. Connect tMongoDBLookupInput to tMap using the Row > Main link.
    This way, the lookup flow to tMap is created. The movie director information is sent via this flow.
  5. Connect tMap to tMongoDBOutput using the Row > Main link and name this connection in the dialog box that is displayed, for example, out1.
  6. Do the same to connect tMap to tLogRow and name this connection reject.
  7. Do not connect tHDFSConfiguration or tMongoDBConfiguration to any other component. The data flow built by these links is sketched after this procedure.
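
The following is not the code the Studio generates; it is a minimal Spark sketch, in Java, of the logic that the linked components implement: a main flow of movies joined in tMap with a lookup flow of directors, with matched rows going to the out1 flow (tMongoDBOutput) and unmatched rows going to the reject flow (tLogRow). The schemas (movieID, title, directorID, id, name) and sample values are illustrative assumptions; in the actual Job, tMongoDBLookupInput reads the directors from MongoDB and tMongoDBOutput writes the joined rows back, which the sketch only marks with comments.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    import java.util.Arrays;

    public class MovieDirectorJoinSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("MovieDirectorJoinSketch")
                    .master("local[*]")
                    .getOrCreate();

            // Main flow: movies, as tFixedFlowInput would provide them.
            Dataset<Row> movies = spark.createDataFrame(Arrays.asList(
                    new Movie(1, "Movie A", 100),
                    new Movie(2, "Movie B", 999)   // no matching director: goes to reject
            ), Movie.class);

            // Lookup flow: directors, as tMongoDBLookupInput would read them from MongoDB.
            Dataset<Row> directors = spark.createDataFrame(Arrays.asList(
                    new Director(100, "Director X")
            ), Director.class);

            // tMap with an inner-join model: matched rows form the out1 flow,
            // which tMongoDBOutput would write to MongoDB.
            Dataset<Row> out1 = movies.join(directors,
                    movies.col("directorID").equalTo(directors.col("id")), "inner");
            out1.show();

            // Rows with no matching director form the reject flow sent to tLogRow.
            Dataset<Row> reject = movies.join(directors,
                    movies.col("directorID").equalTo(directors.col("id")), "left_anti");
            reject.show();

            spark.stop();
        }

        // Simple bean classes standing in for the Job's schemas.
        public static class Movie implements java.io.Serializable {
            private final int movieID; private final String title; private final int directorID;
            public Movie(int movieID, String title, int directorID) {
                this.movieID = movieID; this.title = title; this.directorID = directorID;
            }
            public int getMovieID() { return movieID; }
            public String getTitle() { return title; }
            public int getDirectorID() { return directorID; }
        }

        public static class Director implements java.io.Serializable {
            private final int id; private final String name;
            public Director(int id, String name) { this.id = id; this.name = name; }
            public int getId() { return id; }
            public String getName() { return name; }
        }
    }

The two join calls mirror the two output flows of tMap: the inner join corresponds to the out1 connection, and the left-anti join corresponds to the reject connection that captures the rows tMap could not match.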
