
Resolve the hdp.version variable issue for MapReduce Jobs


  1. Define the hdp.version parameter in your cluster, specifically in the mapred-site.xml file of the cluster. The Hortonworks cluster reads this file to locate the MapReduce application to be used.
    1. In Ambari, click the MapReduce2 service on the service list on the left, then click Configs to open the configuration page and click the Advanced tab.
    2. Scroll down to the Advanced mapred-site section at the end of the page and click Advanced mapred-site to expand it.
    3. Find the mapreduce.application.framework.path parameter. Its value is a path that contains the ${hdp.version} variable.
    4. Replace ${hdp.version} with the version number you found by following the procedure, described at the beginning of this article, for finding the hdp.version value to be used.
    5. Click Save to validate the new configuration, then restart the services to apply the new hdp.version value in the mapred-site.xml file.
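
    For reference, the updated property in mapred-site.xml might look like the following sketch. The version number 2.6.5.0-292 is a hypothetical example; substitute the value you found for your own cluster:

    ```xml
    <!-- Advanced mapred-site: path to the MapReduce framework archive.
         2.6.5.0-292 is a hypothetical hdp.version value; use your own. -->
    <property>
      <name>mapreduce.application.framework.path</name>
      <value>/hdp/apps/2.6.5.0-292/mapreduce/mapreduce.tar.gz#mr-framework</value>
    </property>
    ```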
  2. In the Studio, open the MapReduce Job to be used and click the Run tab to open its view.
  3. Click Advanced settings to open that view, select the Use specific JVM arguments check box, and add the same version number you entered in the cluster. In this example, add -Dhdp.version= followed by that version number.

    This procedure explains only the actions to be performed to solve the HDP version issue for a MapReduce Job. You need to properly configure the other parts of your Job before you can run it successfully.
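    The JVM argument set in the Run view takes the standard Java system-property form. For instance, with the hypothetical version 2.6.5.0-292 used above, the argument would read:

    ```
    -Dhdp.version=2.6.5.0-292
    ```

    Make sure this value matches the version number you entered in mapred-site.xml exactly; a mismatch between the two reproduces the original error.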
