Comment 4 for bug 2034937

Jay Jahns (jjahns) wrote:

Hi - the steps I followed are below. All of the actions were performed through the UI, but the equivalent CLI commands behave identically (a scripted sketch of these steps follows the list).

* Create a volume from an image using a volume type that has no pool name (so the defaults from the backend config are used)

* Mark volume bootable

* Attach volume to new instance

* Conduct a live migration of the instance to another host
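
For reference, a scripted sketch of the reproduction using the OpenStack CLI is below. This assumes a working `openstack` client session (OS_CLOUD or OS_* variables set); the image, volume type, instance name and size are placeholders, and the exact `server migrate` flag syntax varies between python-openstackclient releases.

    # Minimal reproduction sketch; names/sizes are placeholders.
    import subprocess


    def run(*args: str) -> None:
        """Run an openstack CLI command and fail loudly if it errors."""
        subprocess.run(["openstack", *args], check=True)


    # 1. Create a volume from an image, using a volume type that has no
    #    pool_name in its extra specs (backend defaults apply).
    run("volume", "create", "--image", "cirros", "--type", "powermax-default",
        "--size", "10", "repro-vol")

    # 2. Mark the volume bootable.
    run("volume", "set", "--bootable", "repro-vol")

    # 3. Attach the volume to an existing instance.
    run("server", "add", "volume", "repro-instance", "repro-vol")

    # 4. Live migrate the instance to another host; this is where the
    #    missing extra_specs['pool_name'] traceback shows up.
    run("server", "migrate", "--live-migration", "repro-instance")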

Observation:

All of the volume creation and attachment steps complete without issue. The logs indicate that a pool name was not specified, so the default of Diamond/NONE is applied.

Once the live migration runs, the traceback mentioned above is generated. When the volume is being mapped to the destination compute node, a masking view needs to be created, and that code relies solely on the existence of extra_specs['pool_name']. Since that key does not exist, the live migration fails.

Further analysis shows that when the extra specs are set, we check whether pool_name is present. If it is not, we establish the service level and workload from the conf file, but we never create a pool_name at that point.
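
To illustrate the shape of the problem, a rough sketch is below. This is not the actual PowerMax driver code; the conf attributes, the extra-spec keys other than pool_name, and the pool_name format are assumptions used only to show where the key goes missing and where it is later required.

    # Illustrative sketch only -- not the real driver code.

    def set_extra_specs(extra_specs: dict, conf) -> dict:
        """Fill in defaults when the volume type has no pool_name."""
        if "pool_name" not in extra_specs:
            # Current behaviour described above: defaults are applied...
            extra_specs["service_level"] = conf.service_level   # e.g. Diamond
            extra_specs["workload"] = conf.workload or "NONE"
            # ...but pool_name itself is never created here.

            # A fix along the lines suggested here would synthesize it from
            # the same defaults (the exact format is an assumption):
            extra_specs["pool_name"] = "%s+%s+%s" % (
                extra_specs["service_level"],
                extra_specs["workload"],
                conf.srp)
        return extra_specs


    def build_masking_view_name(extra_specs: dict) -> str:
        """The masking-view path indexes the key directly, so it breaks."""
        return "OS-%s-MV" % extra_specs["pool_name"]   # KeyError if never set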

Artificially setting a pool_name built from those defaults prevents the failure, because the key then exists when the masking view is created.
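
If the intent is to apply that workaround without a code change, one option would be to set the key on the volume type directly, e.g. `openstack volume type set --property pool_name=<value> <type>`, using a value that matches the backend defaults; the exact value format expected by the driver would need to be confirmed.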