disk_available_least value updates when instance moved but not to the value expected

Bug #1834527 reported by Wendy Mitchell
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Expired
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Brief Description
-----------------
The originating host updates its disk_available_least value when the instance migrates off of it (in a resize operation), but the value is not what was expected.

nova/test_resize_vm.py::TestResizeDiffHost::test_resize_different_comp_node[local_image]

Severity
--------
standard

Steps to Reproduce
------------------

1. Create a flavor, flavor-3, with disk size 7 (7f392fd5-bb05-4c4d-bf03-7da019c5bb64):

$ openstack flavor create --vcpus=1 --ephemeral=0 --swap=0 --ram=1024 --disk=7 --property hw:mem_page_size=2048 flavor-3

2. Boot an instance using flavor-3.
Instance a61b7899-10ad-4121-87e5-d56674f4dfa3 is created and lands on compute-2.

See nova-compute.log:

{"log":"2019-06-27 17:33:59.078 2813101 INFO nova.compute.claims [req-3da9ec9b-56aa-426a-8f55-312dfa25c85a ccc4888ba22547f4a8811237d1c038fc 5a6966f44a474d64ace047ff1d196a20 - default default] [instance: a61b7899-10ad-4121-87e5-d56674f4dfa3] Attempting claim on node compute-2: memory 1024 MB, disk 7 GB, vcpus 1 CPU\n","stream":"stdout","time":"2019-06-27T17:33:59.079843317Z"}

3. Create another flavor, flavor-4, to occupy the vCPUs that remain on the same host (flavor-4, 53ae6df0-654d-4e3c-82f4-d08ef2ac9a40):

$ openstack flavor create --vcpus=21 --ram=1024 --disk=2 --property hw:mem_page_size=2048 flavor-4

4. Boot an instance with this flavor to occupy the remaining vCPUs on the same host (compute-2 in this case).

See nova-compute.log:

{"log":"2019-06-27 17:34:47.111 2813101 INFO nova.compute.claims [req-c0b7716e-11bc-4c7e-bf28-d0a512302524 ccc4888ba22547f4a8811237d1c038fc 5a6966f44a474d64ace047ff1d196a20 - default default] [instance: a8a1b43e-127f-4e47-9476-64cdeaf2ca33] Attempting claim on node compute-2: memory 1024 MB, disk 2 GB, vcpus 21 CPU\n","stream":"stdout","time":"2019-06-27T17:34:47.112205534Z"}

@ [2019-06-27 17:34:44,124]

$ nova boot --poll --flavor=53ae6df0-654d-4e3c-82f4-d08ef2ac9a40 --key-name=keypair-tenant1 --availability-zone=nova:compute-2 --image=bc3f6915-22c4-44c7-a28b-d5fdea648198 --nic net-id=30010e24-2c88-483a-a137-3d4d2c6909e2 --nic net-id=01cfa21b-c5bd-4e35-aaf0-27665de58d2a tenant1-vm-2

5. Check disk_available_least on compute-2 prior to resizing the first instance (created in step 1):

$ openstack hypervisor show compute-2

The disk_available_least value is 339 at this point @ [2019-06-27 17:35:17,282].

6. Create another flavor (the resize target), flavor-5 (6f682e05-ac28-4450-8b89-3671d0166b19):

$ openstack flavor create --vcpus=2 --ephemeral=0 --swap=0 --ram=1024 --disk=7 flavor-5

Resize the instance to flavor-5 and confirm it moves to a different host (resized off of compute-2):

Resizing VM a61b7899-10ad-4121-87e5-d56674f4dfa3 to flavor 6f682e05-ac28-4450-8b89-3671d0166b19

The instance instance-00000011 is now running on compute-3 with flavor-5 (disk size 7):

| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-3 |
| OS-EXT-SRV-ATTR:instance_name | instance-00000011 |

7. Run openstack hypervisor show on compute-2 to confirm the disk_available_least value returns to the expected value:

$ openstack hypervisor show compute-2

The disk_available_least value now appears to be 344:

| disk_available_least | 344 |

Expected Behavior
------------------
Expected disk_available_least to update to ~346 on the originating host (compute-2):

339 (compute-2 disk_available_least value before resize operation)
+ 7 (disk size of instance that moved away)
= 346 (give or take 1)

Actual Behavior
----------------
This test case failed: disk_available_least on compute-2 is now 344, i.e. the value on the originating host changed by only 5 GB instead of the expected 7 GB:

339 (compute-2 disk_available_least value before the resize operation)
+ 5 (actual change, versus the 7 GB disk of the instance that moved away)
= 344
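The gap can be summarized with a quick shell sketch (all values taken from the observations above):

```shell
before=339        # disk_available_least (GB) on compute-2 before the resize
flavor_disk=7     # root disk (GB) of the instance that resized away
expected=$((before + flavor_disk))
actual=344        # observed disk_available_least after the resize

echo "expected=$expected actual=$actual missing=$((expected - actual))"
# prints: expected=346 actual=344 missing=2
```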

See nfv-vim.log:

2019-06-27T17:35:59.834 controller-0 VIM_Thread[3129659] INFO _instance_state_resize_confirm.py.35 Exiting state (resize-confirm) for tenant1-vm-1.
...

2019-06-27T17:36:00.134 controller-0 VIM_Thread[3129659] DEBUG _vim_nfvi_events.py.288 Instance action-change, uuid=a61b7899-10ad-4121-87e5-d56674f4dfa3, nfvi_action=resize, nfvi_action_state=started, reason=None.
...
2019-06-27T17:36:02.647 controller-0 VIM_Thread[3129659] INFO _instance_director.py.1601 Instance tenant1-vm-1 has recovered on host compute-3.

nova-compute.log on compute-3:

"log":"2019-06-27 17:35:34.056 3011232 INFO nova.compute.claims [req-92ca6582-555c-4091-b3ee-fc13189c7dca 85d9be7b82104a9eb54f30de84be7d68 bef1e6b02a524ae9a6bc50ea21a2b355 - default default] [instance: a61b7899-10ad-4121-87e5-d56674f4dfa3] Attempting claim on node compute-3: memory 1024 MB, disk 7 GB, vcpus 2 CPU\n","stream":"stdout","time":"2019-06-27T17:35:34.057401328Z"}
...

{"log":"2019-06-27 17:35:34.064 3011232 INFO nova.compute.claims [req-92ca6582-555c-4091-b3ee-fc13189c7dca 85d9be7b82104a9eb54f30de84be7d68 bef1e6b02a524ae9a6bc50ea21a2b355 - default default] [instance: a61b7899-10ad-4121-87e5-d56674f4dfa3] Claim successful on node compute-3\n","stream":"stdout","time":"2019-06-27T17:35:34.064449551Z"}

Reproducibility
---------------
yes

System Configuration
--------------------
Storage
(Lab: PV-0, nova/test_resize_vm.py::TestResizeDiffHost::test_resize_different_comp_node[local_image])

Branch/Pull Time/Commit
-----------------------
Load: 20190620T013000Z
Job: STX_build_master_master

tags: added: resize

Matt Riedemann (mriedem) wrote:

Potential duplicate bugs:

- bug 1517442

- bug 1577642 (already from windriver)

Also, there is a periodic task in the compute service (update_available_resource) which by default runs every ~60 seconds. If you waited a minute or so in your test, did you notice that disk_available_least changed to the expected value?
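For reference, the cadence of that periodic task is configurable; a minimal nova.conf sketch (the option name update_resources_interval and its semantics are assumed from nova's compute configuration, not stated in this report):

```ini
[DEFAULT]
# Assumed option: interval in seconds for the resource tracker's
# update_available_resource periodic task, which refreshes
# disk_available_least; 0 means run at the default periodic
# spacing (~60 seconds).
update_resources_interval = 0
```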

tags: added: resource-tracker
Wendy Mitchell (wmitchellwr) wrote:

The value did not later change.

tags: added: stx.regression
Balazs Gibizer (balazs-gibizer) wrote:

I've failed to reproduce it on nova master with a multi-node devstack.

First I tried the simple case:
* boot a VM with disk=5 -> disk_available_least decreased
* resize the VM to disk=10; the VM is moved to another host -> disk_available_least went back to its initial value on the source host of the resize

Then I tried with two VMs:
* boot VM1 with disk=5
* boot VM2 with disk=5
* resize VM1 to disk=10

Still, disk_available_least is updated according to the actual usage.

Which OpenStack nova version are you using?

Changed in nova:
status: New → Incomplete
Launchpad Janitor (janitor) wrote:

[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

Changed in nova:
status: Incomplete → Expired