I found the same problem.
I think the code of host_manager.py's get_all_host_states needs to change like this.
########################################
for service in volume_services:
    if not utils.service_is_up(service) or service['disabled']:
        LOG.warn(_("service is down or disabled. Host name :%s")
                 % service['host'])
        _host = self.host_state_map.get(service['host'])
        if _host:
            self.host_state_map.pop(service['host'])
        continue
    host = service['host']
########################################
The reason is that the get_all_host_states method is used to collect the state of every service, and the Cinder scheduler chooses a cinder-volume service from that set.
So when a cinder-volume service is down or disabled, the state of the bad one needs to be removed from the host_state_map, otherwise the scheduler can still pick it.
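To illustrate the idea outside of Cinder, here is a minimal, self-contained sketch of the same pruning pattern. All names here (prune_host_state_map, the service dicts, the service_is_up callable) are hypothetical stand-ins, not the actual Cinder API:

```python
# Hypothetical sketch of the pruning pattern described above: when a
# service is down or disabled, its cached entry is dropped from the
# state map so a scheduler would never choose that host.

def prune_host_state_map(host_state_map, services, service_is_up):
    """Drop cached state for hosts whose service is down or disabled."""
    for service in services:
        if not service_is_up(service) or service['disabled']:
            # pop() with a default avoids a KeyError if the host
            # was never cached in the first place.
            host_state_map.pop(service['host'], None)
            continue
        # Service is up and enabled: keep (or create) its cached state.
        host_state_map.setdefault(service['host'], {'host': service['host']})
    return host_state_map


# Example: host2's service is disabled, so its stale entry is pruned.
services = [
    {'host': 'host1', 'disabled': False},
    {'host': 'host2', 'disabled': True},
]
state_map = {'host1': {'host': 'host1'}, 'host2': {'host': 'host2'}}
prune_host_state_map(state_map, services, lambda s: True)
print(sorted(state_map))  # → ['host1']
```

The key point is the same as in the patch above: filtering bad services with `continue` alone is not enough, because a previously cached entry for that host would survive in the map; it has to be popped explicitly.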