The instance is volume-backed and its power state is PAUSED; shelving the instance fails
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | In Progress | Undecided | Qiu Fossen |
Bug Description
The instance is volume-backed and its power state is PAUSED, and shelving the instance fails. The reason is that a clean shutdown cannot be attempted on a paused guest instance: some hypervisors will fail the clean shutdown if the guest is not running.
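The failure mode can be sketched as follows. This is a minimal illustration, not Nova's actual code; all names here are hypothetical:

```python
# Minimal sketch (hypothetical names, not Nova's real driver code) of why
# shelving a PAUSED instance fails: a clean shutdown is attempted even
# though the guest is not running, and some hypervisors reject that.

PAUSED = "paused"
RUNNING = "running"

class CleanShutdownFailed(Exception):
    """Raised by the (hypothetical) hypervisor when the guest is not running."""

def clean_shutdown(power_state):
    # Some hypervisors fail a clean (ACPI-style) shutdown unless the
    # guest is actually running.
    if power_state != RUNNING:
        raise CleanShutdownFailed("guest is not running")
    return "shutdown"

def shelve_offload(power_state, clean=True):
    # Buggy behavior: always attempt a clean shutdown first.
    if clean:
        clean_shutdown(power_state)  # raises for a PAUSED guest
    return "shelved"

def shelve_offload_fixed(power_state):
    # Possible guard: skip the clean shutdown when the guest is not running.
    return shelve_offload(power_state, clean=(power_state == RUNNING))
```

With the buggy path, `shelve_offload(PAUSED)` raises and the instance is left in PAUSED status; with the guard, the paused guest is shelved without attempting a clean shutdown.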
Description
===========
The instance is booted from volume and then paused, putting it in PAUSED status. Shelving this instance then fails: its status remains PAUSED instead of becoming SHELVED.
Steps to reproduce
==================
1. Boot an instance from volume.
2. Pause the instance.
3. Shelve the instance.
Expected result
===============
This instance's status is shelved.
Actual result
=============
This instance's status is paused.
Environment
===========
1. OpenStack Rocky release
2. Hypervisor: Libvirt + KVM
3. Storage: Ceph
4. Networking: Neutron with OpenVSwitch
Logs&Configs
============
The logs are as follows (the traceback lines were truncated when pasted):
ERROR oslo_messaging.
Changed in tacker: | |
assignee: | nobody → Qiu Fossen (fossen123) |
affects: | tacker → nova |
Brin Zhang (zhangbailin) wrote : | #1 |
Changed in nova: | |
status: | New → Confirmed |
status: | Confirmed → In Progress |
Lee Yarwood (lyarwood) wrote : | #2 |
Can you edit your initial comment and use the bug template listed below (and on the bug creation page):
!!!!!!!
Each bug report needs to provide a minimum of information; without
it we are not able to address the issue you observed. It is crucial
for other developers to have this information. You can use the
template below, which asks for this information.
"Request for Feature Enhancements" (RFEs) are collected:
* with the blueprint process [1][2][3] (if you're a developer) OR
* via the liaison with the operator (ops) community [4].
This means "wishlist" bugs won't get any attention anymore.
You can ask in the #openstack-nova IRC channel on Freenode if you have questions about this.
References:
[1] https:/
[2] https:/
[3] https:/
[4] http://
!!!!!!!
Description
===========
Some prose which explains more in detail what this bug report is
about. If the headline of this report is descriptive enough, skip
this section.
Steps to reproduce
==================
A chronological list of steps which will trigger the
issue you noticed:
* I did X
* then I did Y
* then I did Z
A list of openstack client commands (with correct argument values)
would be the most descriptive example. To get more information use:
$ nova --debug <command> <arg1> <arg2=value>
or
$ openstack --debug <command> <arg1> <arg2=value>
Expected result
===============
After executing the steps above, what should have
happened if the issue weren't present?
Actual result
=============
What happened instead of the expected result?
What did the issue look like?
Environment
===========
1. Exact version of OpenStack you are running. See the following
list for all releases: http://
If this is from a distro please provide
$ dpkg -l | grep nova
or
$ rpm -ql | grep nova
If this is from git, please provide
$ git log -1
2. Which hypervisor did you use?
(For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
What's the version of that?
3. Which storage type did you use?
(For example: Ceph, LVM, GPFS, ...)
What's the version of that?
4. Which networking type did you use?
(For example: nova-network, Neutron with OpenVSwitch, ...)
Logs & Configs
==============
The tool *sosreport* has support for some OpenStack projects.
It's worth having a look at it. For example, if you want to collect
the logs of a compute node you would execute:
$ sudo sosreport -o openstack_nova --batch
on that compute node. Attach the logs to this bug report. Please
consider that these logs need to be collected in "DEBUG" mode.
For tips on reporting VMware virt driver bugs, please see this doc: https:/
Lee Yarwood (lyarwood) wrote : | #3 |
FWIW I don't think there is a bug with shelve offloading paused instances, at least for Libvirt.
We have the following tempest test for this at present:
Which hypervisors are failing here?
Brin Zhang (zhangbailin) wrote : | #4 |
This was hit in the Rocky release; the hypervisor is Libvirt + KVM. Yes, this bug needs more details; I will respin it and paste more of the error log.
description: | updated |
Sylvain Bauza (sylvain-bauza) wrote : | #5 |
I'll close this bug to keep our bug tracking correct. Feel free to mark this bug as a duplicate if you created another one.
Changed in nova: | |
status: | In Progress → Invalid |
Qiu Fossen (fossen123) wrote : | #6 |
Why? Is this not a bug?
Changed in nova: | |
status: | Invalid → In Progress |
sunhao (suha9102) wrote : | #7 |
I also encountered the same problem. Because the operation is asynchronous, a success message was returned for the shelve operation even though an exception had actually occurred, and the status of the instance is still PAUSED. I also think this is a bug.
There is the same behavior if the server is booted from an image: https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L6325-L6326
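The asynchronous behavior described above can be illustrated with a small sketch (hypothetical names, not Nova's actual code): the API casts the shelve request and answers "accepted" immediately, so a later failure on the compute node leaves the instance PAUSED even though the caller saw success.

```python
# Hypothetical sketch of why the API reports success while the instance
# stays PAUSED: the shelve request is cast asynchronously, and a later
# failure only affects the instance record, never the API response.

def api_shelve(instance, cast):
    cast("shelve", instance)  # fire-and-forget RPC cast
    return 202                # the API has already answered "accepted"

def compute_shelve(instance):
    # Later, on the compute node, the clean shutdown of a paused
    # guest fails and the instance status is left unchanged.
    if instance["power_state"] == "paused":
        raise RuntimeError("clean shutdown failed: guest not running")
    instance["status"] = "shelved_offloaded"
```

Calling `api_shelve` on a paused instance returns 202 to the user, while the subsequent `compute_shelve` raises and the instance's status stays `"paused"`.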