Quota usage value error in batch delete

Bug #1707379 reported by zhongjun on 2017-07-29
This bug affects 1 person
Affects: Manila | Importance: High | Assigned to: zhongjun

Bug Description

Steps to reproduce:

We have 4 share-api services, one running on each of 4 VMs.

# manila create NFS 1          # create a new share
# manila quota-show --detail   # shares in_use is 4, gigabytes in_use is 4
+-----------------------+----------------------------------+
| Property              | Value                            |
+-----------------------+----------------------------------+
| share_groups          | in_use = 0                       |
|                       | limit = 50                       |
|                       | reserved = 0                     |
| gigabytes             | in_use = 4                       |
|                       | limit = 1000                     |
|                       | reserved = 0                     |
| snapshot_gigabytes    | in_use = 0                       |
|                       | limit = 1000                     |
|                       | reserved = 0                     |
| shares                | in_use = 4                       |
|                       | limit = 50                       |
|                       | reserved = 0                     |
+-----------------------+----------------------------------+

# manila delete share_id       # delete the same share at the same time from each of the 4 VMs
# manila quota-show --detail   # shares in_use is now 0, gigabytes in_use is now 0
+-----------------------+----------------------------------+
| Property              | Value                            |
+-----------------------+----------------------------------+
| share_groups          | in_use = 0                       |
|                       | limit = 50                       |
|                       | reserved = 0                     |
| gigabytes             | in_use = 0                       |
|                       | limit = 1000                     |
|                       | reserved = 0                     |
| snapshot_gigabytes    | in_use = 0                       |
|                       | limit = 1000                     |
|                       | reserved = 0                     |
| shares                | in_use = 0                       |
|                       | limit = 50                       |
|                       | reserved = 0                     |
+-----------------------+----------------------------------+

We deleted only one share, but the shares usage dropped from 4 to 0, and the gigabytes usage also dropped from 4 to 0.
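The race can be sketched as follows. This is a minimal illustration, not actual Manila code: each of the four API endpoints reserves and commits its own quota decrement for the same share, and only afterwards discovers (via the NotFound in the traceback below) that another endpoint already deleted the DB row.

```python
import threading

# Hypothetical simulation of the bug: 4 endpoints, 1 share of size 1.
usage = {"shares": 4, "gigabytes": 4}
lock = threading.Lock()
deleted = set()
not_found = []

def delete_share(share_id, size):
    with lock:
        # Every endpoint commits its own usage decrement first ...
        usage["shares"] -= 1
        usage["gigabytes"] -= size
    with lock:
        # ... and only then hits NotFound if another endpoint won the race.
        if share_id in deleted:
            not_found.append(share_id)
        else:
            deleted.add(share_id)

threads = [threading.Thread(target=delete_share, args=("share-1", 1))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# One share was deleted, yet usage dropped by 4: shares went 4 -> 0,
# matching the quota-show output above.
```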

2017-07-31 09:16:25.168 ERROR oslo_messaging.rpc.server [req-b0939a08-b757-4d80-a0d3-6a4e9fdf604e d450d728853a452d9f20b8ff98b9f279 e23850eeb91d4fa3866af634223e454c] Exception during message handling
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server Traceback (most recent call last):
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 157, in _process_incoming
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server result = func(ctxt, **new_args)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/manager.py", line 186, in wrapped
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server return f(self, *args, **kwargs)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/utils.py", line 560, in wrapper
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server return func(self, *args, **kwargs)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/manager.py", line 2753, in delete_share_instance
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server self.db.share_instance_delete(context, share_instance_id)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/db/api.py", line 316, in share_instance_delete
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server return IMPL.share_instance_delete(context, instance_id)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/db/sqlalchemy/api.py", line 165, in wrapper
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server return f(*args, **kwargs)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/db/sqlalchemy/api.py", line 1426, in share_instance_delete
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server session=session)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/db/sqlalchemy/api.py", line 165, in wrapper
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server return f(*args, **kwargs)
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server File "/opt/stack/manila/manila/db/sqlalchemy/api.py", line 1377, in share_instance_get
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server raise exception.NotFound()
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server NotFound: Resource could not be found.
2017-07-31 09:16:25.168 TRACE oslo_messaging.rpc.server
2017-07-31 09:16:25.299 ERROR oslo_messaging.rpc.server [req-7c8d3233-6be9-4ea4-ac88-c9d8bf102c95 d450d728853a452d9f20b8ff98b9f279 e23850eeb91d4fa3866af634223e454c] Exception during message handling
(the second request fails with the same NotFound traceback as above)

zhongjun (jun-zhongjun) on 2017-07-29
summary: - Quota usage value error
+ Quota usage value error in batch delete
zhongjun (jun-zhongjun) on 2017-07-31
description: updated

Fix proposed to branch: master
Review: https://review.openstack.org/489501

Changed in manila:
assignee: nobody → zhongjun (jun-zhongjun)
status: New → In Progress
Changed in manila:
milestone: none → pike-rc1
importance: Undecided → High
zhongjun (jun-zhongjun) wrote :

We have two or more ways to fix this bug:
1. We could merge the quota records in the DB when their user_id, project_id, share_type_id, etc. are the same at share-creation time.
2. The quota reserve-and-commit model has many problems [1]; we could remove that model from the code entirely.

We could discuss this at the PTG or another meeting to decide how to resolve the bug.

[1] https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/cells-count-resources-to-check-quota-in-api.html
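Option 2 above follows the counting approach in the linked Nova spec. A hedged sketch, with hypothetical table and function names: instead of maintaining reservation/usage counters that can be corrupted by concurrent commits, recount the live DB rows at every enforcement point.

```python
import sqlite3

# Hypothetical shares table; not the real Manila schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shares (id TEXT PRIMARY KEY, project_id TEXT, "
             "size INTEGER)")
conn.executemany("INSERT INTO shares VALUES (?, ?, ?)",
                 [("share-%d" % i, "proj-a", 1) for i in range(4)])

def check_share_quota(conn, project_id, requested, limit):
    # There is no usage counter to corrupt: in_use is recomputed from the
    # actual rows each time quota is checked.
    (in_use,) = conn.execute(
        "SELECT COUNT(*) FROM shares WHERE project_id = ?",
        (project_id,)).fetchone()
    return in_use + requested <= limit

ok = check_share_quota(conn, "proj-a", 1, 50)       # 4 + 1 <= 50
over = check_share_quota(conn, "proj-a", 3, 5)      # 4 + 3 >  5
conn.execute("DELETE FROM shares WHERE id = 'share-0'")
after_delete = check_share_quota(conn, "proj-a", 1, 5)  # 3 + 1 <= 5
```

Duplicate deletes cannot skew usage here, because nothing is decremented; the count simply reflects whatever rows survived.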

Changed in manila:
status: In Progress → Confirmed
status: Confirmed → Incomplete
Changed in manila:
milestone: pike-rc1 → queens-1
Changed in manila:
milestone: queens-1 → pike-rc1
Changed in manila:
status: Incomplete → In Progress
Changed in manila:
milestone: pike-rc1 → none
milestone: none → queens-1

Fix proposed to branch: master
Review: https://review.openstack.org/493071

Changed in manila:
assignee: zhongjun (jun-zhongjun) → Valeriy Ponomaryov (vponomaryov)
Changed in manila:
assignee: Valeriy Ponomaryov (vponomaryov) → zhongjun (jun-zhongjun)

Change abandoned by zhongjun (<email address hidden>) on branch: master
Review: https://review.openstack.org/489501
Reason: https://review.openstack.org/#/c/493071/

Reviewed: https://review.openstack.org/493071
Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=3c596304991927f62290fd940a7f518ec1cae4a2
Submitter: Zuul
Branch: master

commit 3c596304991927f62290fd940a7f518ec1cae4a2
Author: Valeriy Ponomaryov <email address hidden>
Date: Fri Aug 11 19:16:16 2017 +0300

    Fix quota usages update deleting same share from several API endpoints

    It is possible to update quota usages multiple times by sending a share
    deletion request to several API endpoints concurrently.
    So, move the quota usages update logic that is triggered by share
    deletion down to the DB functions level, where it runs only when the
    share deletion succeeded. That way, concurrent requests that failed to
    delete the DB record won't commit quota usages updates.

    Change-Id: If7d52e08d00d435f2e26c30654f0d2180b17b81a
    Closes-Bug: #1707379
    Closes-bug: #1707377
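The idea in the merged fix can be sketched like this (an illustration under assumed names, not the actual Manila change): tie the quota decrement to the DB-level delete, so only the one request whose DELETE actually removed the row updates usage, while the losers see NotFound without touching quota.

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE shares (id TEXT PRIMARY KEY, size INTEGER)")
conn.executemany("INSERT INTO shares VALUES (?, ?)",
                 [("share-%d" % i, 1) for i in range(4)])

usage = {"shares": 4, "gigabytes": 4}
lock = threading.Lock()
not_found = []

def delete_share(share_id, size):
    with lock:  # serialize access to the single shared connection
        cur = conn.execute("DELETE FROM shares WHERE id = ?", (share_id,))
        if cur.rowcount == 1:
            # Only the request that actually deleted the row commits the
            # quota usage update.
            usage["shares"] -= 1
            usage["gigabytes"] -= size
        else:
            # The row was already gone: report NotFound, leave quota alone.
            not_found.append(share_id)

threads = [threading.Thread(target=delete_share, args=("share-0", 1))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# usage["shares"] is now 3, not 0: the concurrent deletes no longer
# over-commit the decrement.
```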

Changed in manila:
status: In Progress → Fix Released

This issue was fixed in the openstack/manila 6.0.0.0b3 development milestone.
