quota_usages in_use value changes incorrectly when deleting a resizing instance

Bug #1099729 reported by wangpan
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: Undecided
Assigned to: wangpan
Milestone: 2013.1

Bug Description

Steps to reproduce in devstack:
1. create an instance with flavor m1.small as the demo user
2. resize it from m1.small to m1.tiny
3. while the task_state of the instance is 'resize_finish', delete it as the demo user
4. query the nova db as below:
mysql> select * from quota_usages ;
+---------------------+---------------------+------------+---------+----+----------------------------------+-----------+--------+----------+---------------+
| created_at | updated_at | deleted_at | deleted | id | project_id | resource | in_use | reserved | until_refresh |
+---------------------+---------------------+------------+---------+----+----------------------------------+-----------+--------+----------+---------------+
| 2013-01-08 04:43:21 | 2013-01-15 07:32:20 | NULL | 0 | 1 | c0fb216d13cc46ca9abe7261ab9ab0d5 | instances | 0 | 0 | NULL |
| 2013-01-08 04:43:21 | 2013-01-15 07:32:20 | NULL | 0 | 2 | c0fb216d13cc46ca9abe7261ab9ab0d5 | ram | 1536 | 0 | NULL |
| 2013-01-08 04:43:21 | 2013-01-15 07:32:20 | NULL | 0 | 3 | c0fb216d13cc46ca9abe7261ab9ab0d5 | cores | 0 | 0 | NULL |
+---------------------+---------------------+------------+---------+----+----------------------------------+-----------+--------+----------+---------------+

This is a race condition, so it reproduces only intermittently, but you can make it reliable by adding a time.sleep(10) to nova/compute/manager.py:_finish_resize(), like this:
 # note: 'import time' is needed at the top of nova/compute/manager.py
 LOG.debug(_("-----------------------sleep 10 start-------------------------"))
 time.sleep(10)
 self.network_api.setup_networks_on_host(context, instance,
                                         migration['dest_compute'])
When you see this debug log during the resize, delete the instance; the issue will then occur almost every time.

The reason is that when we delete an instance, we use instance['memory_mb'] and instance['vcpus'] to create the quota reservations. If the instance is resizing but the resize has not been 'finished', the in_use value in quota_usages has NOT been committed yet (still 2048M), while instance['memory_mb'] and instance['vcpus'] have already been updated (512M), so the delete commits a wrong reservation (-512M) and the in_use value becomes 2048-512=1536.
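
To make the arithmetic concrete, here is a minimal standalone Python sketch of the race (made-up variable names, not Nova's actual API), using the flavor sizes from this report (m1.small = 2048M, m1.tiny = 512M):

 # quota_usages.in_use still reflects the old flavor, because the
 # resize reservation (2048M -> 512M) has not been committed yet.
 in_use_ram = 2048

 # instance['memory_mb'] has already been updated to the new flavor
 # by _finish_resize().
 instance_memory_mb = 512

 # Buggy delete path: the reservation is built from the updated
 # instance record instead of the flavor the usage was charged for.
 in_use_ram += -instance_memory_mb  # -512, should have been -2048
 print(in_use_ram)                  # 1536, the bogus value shown above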

Tags: quotas
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/19772

Changed in nova:
assignee: nobody → wangpan (hzwangpan)
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/19772
Committed: http://github.com/openstack/nova/commit/796716d4cc51f35a734aa1f9d70ea567c37b5a79
Submitter: Jenkins
Branch: master

commit 796716d4cc51f35a734aa1f9d70ea567c37b5a79
Author: Wangpan <email address hidden>
Date: Wed Jan 16 10:00:47 2013 +0800

    Fix wrong quota reservation when deleting resizing instance

    When we delete an instance which is resizing from m1.small to m1.tiny, we use
    instance['memory_mb'] to create the quota's ram reservation. If the resize is not
    'finished', the in_use value in quota_usages is NOT committed (2048M),
    but instance['memory_mb'] is updated (512M) in _finish_resize(), so the
    delete after the update will commit a wrong ram reservation (-512M),
    and the in_use value changes to 2048-512=1536.
    The cores reservation encounters the same problem.

    Fixes bug #1099729

    Change-Id: Ide54fc9d822041f535819c496822aeac5a8dc68d
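
In rough terms, the fix is to reserve quota against the flavor the usage was actually charged for whenever a resize is still in flight. Below is a hedged sketch of that idea in plain Python (hypothetical helper, flavor table, and status check, not necessarily the shape of the merged patch; see the review linked above for the real change):

 # Hypothetical flavor table for the sketch (ids are made up).
 INSTANCE_TYPES = {1: {'memory_mb': 2048, 'vcpus': 1},  # m1.small
                   2: {'memory_mb': 512, 'vcpus': 1}}   # m1.tiny

 def _deltas_for_delete(instance, migration):
     # If a resize is still in flight, quota_usages was charged for the
     # old flavor, so deleting must release the old flavor's ram and
     # cores rather than the already-updated instance values.
     if migration and migration['status'] not in ('finished', 'confirmed'):
         old = INSTANCE_TYPES[migration['old_instance_type_id']]
         return {'instances': -1, 'ram': -old['memory_mb'],
                 'cores': -old['vcpus']}
     return {'instances': -1, 'ram': -instance['memory_mb'],
             'cores': -instance['vcpus']}

 # With a resize in flight from m1.small (id 1) to m1.tiny:
 instance = {'memory_mb': 512, 'vcpus': 1}
 migration = {'status': 'post-migrating', 'old_instance_type_id': 1}
 print(_deltas_for_delete(instance, migration))  # ram is -2048, not -512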

Changed in nova:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → grizzly-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: grizzly-3 → 2013.1
Joe Gordon (jogo)
tags: added: quotas