Comment 2 for bug 1265813

Andrew Lazarev (alazarev) wrote:

This bug was fixed by https://review.openstack.org/#/c/66322

The cluster still ends up in the Error state if the user omits a port where Hadoop expects one. But that is a more general problem: Hadoop config validation. For example, if the user doesn't include a port in dfs.http.address (as Matt mentioned), the cluster fails to start because the namenode service doesn't come up. The same thing happens if the user enters any other wrong value for dfs.http.address, even with a port included.
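For reference, a minimal sketch of the value Hadoop 1.x expects in hdfs-site.xml (the hostname below is a placeholder; dfs.http.address must be a host:port pair, with 0.0.0.0:50070 being the default):

  <property>
    <name>dfs.http.address</name>
    <!-- Must include both host and port; omitting the port
         prevents the namenode service from starting. -->
    <value>namenode-host.example.com:50070</value>
  </property>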

So the Sahara issue is fixed. Hadoop config validation is a much more general problem than port validation alone.

Closing as resolved.