Creating a MapReduce Job

Talend Data Fabric Getting Started Guide

  1. In the Repository tree view of the Integration perspective, right-click the Job Designs node and select Create Big Data Batch Job from the contextual menu.

    The [New Big Data Batch Job] wizard opens to help you define the main properties of the new Job.

  2. In the wizard, fill in the Job properties.

    The fields correspond to the following properties:

    Name: The name of the new Job. A message comes up if you enter prohibited characters.

    Framework: The computation framework to use for the Job, for example MapReduce or Spark. To create a MapReduce Job, select MapReduce. (A generic MapReduce sketch appears at the end of this section for orientation.)

    Purpose: The Job purpose or any useful information regarding how the Job is used.

    Description: A Job description containing any information that helps you describe what the Job does and how it does it.

    Author: A read-only field that shows the login of the current user by default.

    Locker: A read-only field that shows the login of the user who owns the lock on the current Job. This field is empty when you are creating a Job and contains data only when you are editing the properties of an existing Job.

    Version: A read-only field. You can manually increment the version using the M and m buttons.

    Status: A list from which you select the status of the Job you are creating.

    Path: A list from which you select the folder in which the Job will be created.

  3. Click Finish. An empty design workspace opens, showing the name of the MapReduce Job as a tab label.

You can also create a MapReduce Job from an existing Standard Job by converting that Job to the MapReduce framework. For further information about how to convert a Job between different frameworks, see the Talend Studio User Guide.
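
When you select MapReduce as the framework, the Job you design is executed on a Hadoop cluster through Hadoop's MapReduce programming model. For orientation only, the sketch below shows a generic word-count job written directly against the standard org.apache.hadoop.mapreduce API (a Mapper, a Reducer, and a driver that submits the Job). It illustrates the mapper/reducer/driver structure that this framework runs; it is not the code Talend Studio generates for your Job, and the class and path names are illustrative.

    // Generic Hadoop MapReduce word count, for illustration only.
    // This is NOT the code generated by Talend Studio; it only shows the
    // mapper/reducer/driver structure that the MapReduce framework executes.
    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: emit (word, 1) for every word in the input line.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce phase: sum the counts collected for each word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable value : values) {
                    sum += value.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Driver: configures and submits the job to the cluster.
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory in HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

In practice you do not write this code yourself: the components you drop onto the design workspace are translated to the selected framework when the Job is built and run.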