In the Repository tree view of the Integration perspective, right-click the Job Designs node and select Create Big Data Batch Job from the contextual menu.
The [New Big Data Batch Job] wizard opens to help you define the main properties of the new Job.
Fill in the Job properties as shown in the previous screenshot.
The fields correspond to the following properties:
Name: the name of the new Job. A message comes up if you enter prohibited characters.
Framework: the computation framework used to create the Job, for example MapReduce or Spark.
Purpose: the Job purpose or any useful information regarding the Job's use.
Description: any information that helps you describe what the Job does and how it does it.
Author: a read-only field that shows by default the current user login.
Locker: a read-only field that shows by default the login of the user who owns the lock on the current Job. This field is empty when you are creating a Job and contains data only when you are editing the properties of an existing Job.
Version: a read-only field. You can manually increment the version using the M and m buttons.
Status: a list from which you select the status of the Job you are creating.
Path: a list from which you select the folder in which the Job will be created.
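As an aside, the name check mentioned above can be pictured with a short sketch. This is illustrative only: the exact set of prohibited characters is enforced by Talend Studio itself, and this example assumes the common convention that a Job name contains only letters, digits, and underscores, starting with a letter.

```python
import re

# Assumed naming rule for illustration: a leading letter followed by
# letters, digits, or underscores. Talend Studio applies its own check.
JOB_NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def is_valid_job_name(name: str) -> bool:
    """Return True when the name matches the assumed naming rule."""
    return bool(JOB_NAME_PATTERN.match(name))

print(is_valid_job_name("MyBatchJob"))  # accepted
print(is_valid_job_name("my job!"))     # rejected: space and '!'
```

If the Studio rejects a name, replacing spaces and punctuation with underscores is usually enough to satisfy the check.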
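The two version buttons described above behave like a two-part version bump, which can be sketched as follows. The function names bump_major and bump_minor are hypothetical; only the M (major) and m (minor) buttons exist in the wizard.

```python
def bump_minor(version: str) -> str:
    """Increment the minor part, e.g. '0.1' -> '0.2' (the m button)."""
    major, minor = version.split(".")
    return f"{major}.{int(minor) + 1}"

def bump_major(version: str) -> str:
    """Increment the major part and reset the minor, e.g. '0.2' -> '1.0' (the M button)."""
    major, _ = version.split(".")
    return f"{int(major) + 1}.0"

print(bump_minor("0.1"))  # 0.2
print(bump_major("0.2"))  # 1.0
```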
Once you click Finish, an empty design workspace opens, showing the name of the new Job as a tab label.
You can also create a MapReduce Job based on a Standard Job by converting this Standard Job to the MapReduce framework. For further information about how to convert a Job between different frameworks, see your Talend Studio User Guide.