resize fails with: "Unable to resize disk down" for ephemeral_gb

Bug #1755392 reported by wanghongtao
Affects: OpenStack Compute (nova)
Status: Opinion
Importance: Wishlist
Assigned to: Unassigned

Bug Description

Environment: OpenStack master branch

Steps to reproduce:
1. As the admin user, create a flavor:
   Flavor Name: test1, VCPUs: 1, RAM: 256MB, Root Disk: 5GB, Ephemeral Disk: 5GB, Swap Disk: 5MB
2. As the admin user, create an instance using flavor test1.
3. As the admin user, create a second flavor with a smaller ephemeral disk:
   Flavor Name: test2, VCPUs: 1, RAM: 256MB, Root Disk: 5GB, Ephemeral Disk: 4GB, Swap Disk: 5MB
4. Resize the instance, choosing flavor test2.

Problem:
1. If the resize decreases ephemeral_gb, nova-compute raises an exception.
2. The flavor does not change, but Horizon shows no warning.

Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server [None req-2b01ccc1-f5b1-4a7e-8c07-6cf1f06a56e5 admin admin] Exception during message handling: ResizeError: Resize error: Unable to resize disk down.
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/exception_wrapper.py", line 76, in wrapped
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server self.force_reraise()
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/exception_wrapper.py", line 67, in wrapped
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 186, in decorated_function
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server "Error: %s", e, instance=instance)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server self.force_reraise()
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 156, in decorated_function
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/utils.py", line 970, in decorated_function
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 214, in decorated_function
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server self.force_reraise()
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 202, in decorated_function
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 4288, in resize_instance
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server migration, image, disk_info, migration.dest_compute)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server self.gen.throw(type, value, traceback)
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server File "/opt/stack/nova/nova/compute/manager.py", line 7429, in _error_out_instance_on_exception
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server raise error.inner_exception
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server ResizeError: Resize error: Unable to resize disk down.
Mar 13 02:11:21 localhost.localdomain nova-compute[7792]: ERROR oslo_messaging.rpc.server

Analysis:
1. The ephemeral_gb resize-down case is only caught in nova-compute, after the resize request has already been accepted.
2. Suggestion: nova-api should validate this case and raise an exception when ephemeral_gb is decreased.
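The suggested fail-fast behavior could be sketched as follows. This is a simplified illustration, not actual nova code; the class and function names are hypothetical stand-ins for nova's exception and flavor-comparison logic:

```python
# Sketch of the proposed API-side check (hypothetical names, not nova code):
# compare the old and new flavors before the request is cast to nova-compute.

class CannotResizeDisk(Exception):
    """Stand-in for a nova-style 'cannot resize disk' exception."""


def validate_resize(old_flavor, new_flavor):
    """Reject a resize that shrinks the ephemeral disk.

    old_flavor/new_flavor are plain dicts with an 'ephemeral_gb' key,
    mirroring the flavor fields from the reproduction steps.
    """
    if new_flavor['ephemeral_gb'] < old_flavor['ephemeral_gb']:
        raise CannotResizeDisk(
            'Resize of ephemeral disk from %dG to %dG is not allowed.'
            % (old_flavor['ephemeral_gb'], new_flavor['ephemeral_gb']))


# Reproduction from the bug: test1 (5G ephemeral) -> test2 (4G ephemeral)
test1 = {'ephemeral_gb': 5}
test2 = {'ephemeral_gb': 4}
try:
    validate_resize(test1, test2)
except CannotResizeDisk as e:
    print('rejected: %s' % e)
```

With a check like this in the API, the user would get an immediate error response instead of a 202 followed by a late failure in nova-compute.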

Tags: resize
Matt Riedemann (mriedem) wrote :

There is no error in Horizon because the compute API isn't what's raising the validation error, the nova-compute service is. Horizon talks to nova-api which then RPC casts to the nova-conductor service which eventually casts to nova-compute to do the migration (resize), and the compute (libvirt driver in this case) is doing some additional validation.

The resize operation will fail, and an instance action event (and likely a fault) will be recorded with the failure, which you can view in Horizon to confirm that the resize failed.

What are you expecting to change here? That the compute API will do the validation that the compute is doing and fail fast in the API so the user gets a 400 response rather than a 202 response?

Matt Riedemann (mriedem) wrote :

There is a somewhat related check in the API, but it's for the root disk and volume-backed instances:

https://github.com/openstack/nova/blob/2ec8c49f6cb4a0e7dba217e824c20d9c703d2105/nova/compute/api.py#L3319-L3324
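The linked check only rejects shrinking the root disk (and only for non-volume-backed instances). A version extended to cover this bug might look like the following simplified sketch; it mirrors the shape of the check but is not the actual nova code:

```python
# Simplified sketch (not actual nova code) of extending the existing
# root-disk resize-down check to also cover the ephemeral disk.

class CannotResizeDisk(Exception):
    """Stand-in for a nova-style 'cannot resize disk' exception."""


def check_disk_resize(current, new, volume_backed=False):
    # Existing behavior (simplified): a smaller root disk is rejected
    # unless the instance is volume-backed.
    if not volume_backed and new['root_gb'] < current['root_gb']:
        raise CannotResizeDisk('Resize to smaller root disk is not allowed.')
    # Hypothetical extension for this bug: also reject a smaller
    # ephemeral disk, which nova-compute cannot resize down.
    if new['ephemeral_gb'] < current['ephemeral_gb']:
        raise CannotResizeDisk(
            'Resize to smaller ephemeral disk is not allowed.')
```

The open question, raised below, is whether such a blanket rejection is correct in all storage configurations.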

wanghongtao (hongtao.wang) wrote :

@Matt Riedemann
1. On the current nova master branch, the user cannot see that the resize failed when ephemeral_gb is decreased.
2. I suggest that nova-api perform this validation and warn the user.

wanghongtao (hongtao.wang) wrote :

@Matt Riedemann
That check only covers root_gb; when ephemeral_gb is decreased, nova-compute still fails with "Resize error: Unable to resize disk down".

https://github.com/openstack/nova/blob/2ec8c49f6cb4a0e7dba217e824c20d9c703d2105/nova/compute/api.py#L3319-L3324

melanie witt (melwitt) wrote :

This looks like a request for enhancement to pre-validate in the API whether an ephemeral_gb resize down will fail later on in nova-compute to fail faster for users. I think the challenge here is going to be that resize down is allowed for shared storage, but not for local storage, and I don't know if we have a way to check "if shared storage" in the API.
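The decision logic described here can be sketched as a small predicate. The `shared_storage` flag is the crux: it stands in for a capability the API would somehow have to learn, which, per the comment above, nova does not currently expose at that layer:

```python
# Sketch of the decision logic described above (hypothetical helper, not
# nova code): resizing ephemeral down is said to work on shared storage
# but not on local storage, so a correct API-side rejection would need
# to know which one the instance is on.

def should_reject_in_api(current_eph_gb, new_eph_gb, shared_storage):
    """Return True if an API-side pre-check should reject the resize.

    'shared_storage' is the piece of information the API does not
    currently have, which is what makes this enhancement hard.
    """
    return new_eph_gb < current_eph_gb and not shared_storage
```

Without a reliable way to answer "is this instance on shared storage?" in the API, a blanket rejection would wrongly block resizes that nova-compute could actually complete.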

Changed in nova:
importance: Undecided → Wishlist
status: New → Opinion