Create volume fails with quota exceeded even though the quota was updated

Bug #1293418 reported by Jerry Cai
This bug affects 5 people
Affects: Cinder · Status: Invalid · Importance: Undecided · Assigned to: Ian Govett

Bug Description

Defect log:
--------------------------------------------------------------------------------------------------------------
  File "/usr/lib/python2.6/site-packages/nose-1.1.2-py2.6.egg/nose/suite.py", line 208, in run
    self.setUp()
  File "/usr/lib/python2.6/site-packages/nose-1.1.2-py2.6.egg/nose/suite.py", line 291, in setUp
    self.setupContext(ancestor)
  File "/usr/lib/python2.6/site-packages/nose-1.1.2-py2.6.egg/nose/suite.py", line 314, in setupContext
    try_run(context, names)
  File "/usr/lib/python2.6/site-packages/nose-1.1.2-py2.6.egg/nose/util.py", line 478, in try_run
    return func()
  File "/opt/stack/tempest/tempest/api/volume/test_volume_metadata.py", line 29, in setUpClass
    cls.volume = cls.create_volume()
  File "/opt/stack/tempest/tempest/api/volume/base.py", line 118, in create_volume
    **kwargs)
  File "/opt/stack/tempest/tempest/services/volume/xml/volumes_client.py", line 152, in create_volume
    resp, body = self.post('volumes', str(common.Document(volume)))
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 201, in post
    return self.request('POST', url, headers, body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 376, in request
    resp, resp_body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 436, in _error_checker
    raise exceptions.OverLimit(resp_body)
OverLimit: Quota exceeded
Details: {'message': 'VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded', 'code': '413'}
--------------------------------------------------------------------------------------------------------------

Create volume using tenant: admin
[root@localhost]# keystone tenant-list
+----------------------------------+--------------------+---------+
| id                               | name               | enabled |
+----------------------------------+--------------------+---------+
| 54ce53cb1ed3422fa77d75134acd4c8e | admin              | True    |

I tried configuring it in the following 3 ways, but all failed:
---------------------------1-------------------------
In /etc/cinder/cinder.conf:
quota_volumes = 100
[NG] Restarting cinder-api doesn't help.

---------------------------2-------------------------
>> cinder quota-update --volumes 100 admin
[NG] Restarting cinder-api doesn't help.

---------------------------3-------------------------
>> Tried modifying the code of "/opt/stack/cinder/cinder/quota.py":
quota_opts = [
    cfg.IntOpt('quota_volumes',
               default=100,
               help='number of volumes allowed per project'),
    cfg.IntOpt('quota_snapshots',
               default=100,
               help='number of volume snapshots allowed per project'),
    cfg.IntOpt('quota_gigabytes',
               default=1000,
               help='number of volume gigabytes (snapshots are also included) '
                    'allowed per project'),

[NG] Restarting cinder-api doesn't help.

---------------------------GOOD-------------------------
Finally I found a workaround:
>> cinder quota-update --volumes 100 54ce53cb1ed3422fa77d75134acd4c8e
Updating the quota using the project_id instead of the project_name does work.

I believe this is a bug in "/opt/stack/cinder/cinder/quota.py":
def limit_check():
    ....
    project_id = context.project_id
    ....

and in "/opt/stack/cinder/cinder/volume/flows/api/create_volume.py"#QuotaReserveTask.

I hope someone with more insight (whoever refactored the create-volume task flow) could check how this defect should be addressed.
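To make the suspected mechanism concrete, here is a minimal sketch (hypothetical names and data, not the actual cinder code) of a DbQuotaDriver-style lookup. The limit is resolved by the project_id carried in the request context; a per-project database row overrides the config default; and a row keyed by a tenant *name* (as created by `cinder quota-update ... admin`) is simply never consulted. It also shows why raising `quota_volumes` in cinder.conf has no visible effect once an id-keyed override row exists: the config value is only the fallback.

```python
# Hypothetical sketch of effective-quota resolution (not the real cinder
# code): an override row keyed by project_id wins; the config default is
# only used when no override row exists for that id.

CONF_QUOTA_VOLUMES = 100          # like quota_volumes in cinder.conf

# Override rows, keyed by whatever string quota-update was given.
quota_overrides = {
    "2e19e18e93f64c1781fe7d1de5ac784b": {"volumes": 10},  # stale row
    "admin": {"volumes": 100},    # created by quota-update with a *name*
}

def effective_limit(project_id, resource="volumes"):
    """Return the limit a limit_check()-style function would enforce."""
    override = quota_overrides.get(project_id, {})
    return override.get(resource, CONF_QUOTA_VOLUMES)

# The API request context carries the project UUID, never the name, so
# the name-keyed row is dead weight and the stale UUID-keyed row wins:
print(effective_limit("2e19e18e93f64c1781fe7d1de5ac784b"))  # prints 10
```

Updating the row under the UUID key (the workaround above) is therefore the only change that the create-volume path actually sees.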

Revision history for this message
Vincent Hou (houshengbo) wrote :

Hi Jerry,

Here is the page introducing how quotas are managed for cinder: http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html. And it is certain that the second approach you tried, "cinder quota-update --volumes 100 admin", does not work, because you need to specify the tenant id instead of the tenant name.

I think this is not a bug, but perhaps a new workaround. If we change the quota in the conf file, then the quota changes after a successful restart. However, I am a bit concerned about it, because we may need a lot of validation. For example, if we reduce the quota to 20 volumes and we already have 21 volumes, how shall we deal with this situation? A successful restart with a warning? With an error? Or with ...? It gets a bit sticky.
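The validation concern above can be sketched with a small hypothetical helper (not cinder code): before accepting a lowered quota, compare it with the project's current usage and classify the result, so the caller can decide whether to reject, warn, or accept and only block new creates.

```python
# Hypothetical sketch of the validation being discussed: what to do when
# a proposed quota is lower than what the project already consumes.

def check_quota_update(new_limit, in_use):
    """Classify a proposed quota change against current usage.

    Returns "ok" when current usage fits under the new limit, or
    "over-commit" when existing resources already exceed it (the service
    would then have to choose: reject, warn, or allow with new creates
    blocked until usage drops).
    """
    if new_limit < 0:             # -1 conventionally means "unlimited"
        return "ok"
    return "ok" if in_use <= new_limit else "over-commit"

print(check_quota_update(20, 21))  # prints "over-commit"
```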

Changed in cinder:
status: New → Incomplete
Revision history for this message
Jerry Cai (caimin) wrote :

Hi Vincent,
I believe this is a defect in the cinder quota handling, since I hit this problem specifically in cinder. I am under the admin tenant and operating on volumes there; I'll show you more information:

Step 1. Show that there is enough volume quota in cinder:
# cinder quota-show admin
+--------------------------------------------+-------+
| Property                                   | Value |
+--------------------------------------------+-------+
| gigabytes                                  | 1000  |
| gigabytes_pvc:10_11_2_10:v7000stor default | -1    |
| snapshots                                  | 10    |
| snapshots_pvc:10_11_2_10:v7000stor default | -1    |
| volumes                                    | 100   |
| volumes_pvc:10_11_2_10:v7000stor default   | -1    |
+--------------------------------------------+-------+

Step 2. But creating a volume fails:
# cinder --os-username admin --os-tenant-name admin --os-password passw0rd create 1
ERROR: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-fa9df70b-de33-4eea-ab5c-ce2960c9d212)

Step 3. Show the quota for tenant "admin" by tenant_id; it shows only 10 allowed:
# cinder quota-show 2e19e18e93f64c1781fe7d1de5ac784b
+--------------------------------------------+-------+
| Property                                   | Value |
+--------------------------------------------+-------+
| gigabytes                                  | -1    |
| gigabytes_pvc:10_11_2_10:v7000stor default | -1    |
| snapshots                                  | -1    |
| snapshots_pvc:10_11_2_10:v7000stor default | -1    |
| volumes                                    | 10    |
| volumes_pvc:10_11_2_10:v7000stor default   | -1    |
+--------------------------------------------+-------+

Step 4. Update the volumes quota for tenant "2e19e18e93f64c1781fe7d1de5ac784b" (the tenant id of admin):
# cinder quota-update --volumes 100 2e19e18e93f64c1781fe7d1de5ac784b

Step 5. Create a volume again in the "admin" tenant; it succeeds:
# cinder --os-username admin --os-tenant-name admin --os-password passw0rd create 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| created_at                     | 2014-03-20T08:22:14.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 9365f0e0-8f5c-461a-aa3f-c079abb58512 |
| metadata                       | {}                                   |
| name                           | None                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-...


Changed in cinder:
assignee: nobody → Juan Manuel Ollé (juan-m-olle)
Revision history for this message
Juan Manuel Ollé (juan-m-olle) wrote :

What I see is that the tenant name is not allowed in the API, and the tenant_id must be used.
On the other hand, no check is made on that tenant_id, which is why you can use the tenant name and no error is thrown. In fact, if you use an invalid ID it will apparently still set the quota.

jmolle@jmolle-Controller:/opt/stack/cinder$ cinder quota-show invalid_uuid
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 10    |
+-----------+-------+
jmolle@jmolle-Controller:/opt/stack/cinder$ cinder quota-update --volumes 1 invalid_uuid
jmolle@jmolle-Controller:/opt/stack/cinder$ cinder quota-show invalid_uuid
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 1     |
+-----------+-------+

I don't know if there is a reason in cinder not to check whether the tenant id exists, or, when the value is not a UUID (the tenant-name case), not to resolve it to the correct id.
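One cheap way to catch the mistake described here, sketched with a hypothetical helper (not cinder code): reject quota-update targets that do not even parse as a UUID before writing the row. Both "admin" and "invalid_uuid" fail that check, while real project ids pass. A complete fix would additionally ask keystone whether the project actually exists.

```python
# Sketch of a syntactic guard for quota-update targets: keystone project
# ids are UUID hex strings, so a name like "admin" can be rejected early.
import uuid

def looks_like_project_id(value):
    """Return True when value parses as a UUID (with or without dashes)."""
    try:
        uuid.UUID(value)
        return True
    except (TypeError, ValueError, AttributeError):
        return False

print(looks_like_project_id("54ce53cb1ed3422fa77d75134acd4c8e"))  # True
print(looks_like_project_id("admin"))                             # False
print(looks_like_project_id("invalid_uuid"))                      # False
```

This would turn the silent no-op rows shown above into an immediate client-side or API-side error.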

Changed in cinder:
assignee: Juan Manuel Ollé (juan-m-olle) → nobody
Revision history for this message
Jerry Cai (caimin) wrote :

Hi Juan, I know that accepting only the tenant-id rather than the tenant-name could be a limitation. But as I remember, setting "quota_volumes" in cinder.conf did work in a previous release, yet it doesn't currently.

As cinder.conf describes it, setting quota_volumes should apply to all tenants, but even if I set it to 1000 it still keeps raising 'VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded'. From this point of view, I think this is a valid bug.

Revision history for this message
Jerry Cai (caimin) wrote :

Any clue, or has anyone encountered the same problem? I can fix this, but I need some suggestions.

Revision history for this message
Duncan Thomas (duncan-thomas) wrote :

Having to use the project_id rather than any other identifier is a known issue. The fact that there is no validation on it is also a known issue, though I don't know if there's a bug open for it. If there is a keystone API to validate a project_id then we should look at using it.

As to why changing the config file didn't work: can you dump out the quotas table from your cinder database and post it here, please?

Revision history for this message
Mike Perez (thingee) wrote :
Revision history for this message
Jerry Cai (caimin) wrote :

These two problems still exist in the latest Juno release:
1. If I want to update the quota for the admin project, the following CLI doesn't work:
>> cinder quota-update --volumes 1000 admin
I have to do it like this:
>> cinder quota-update --volumes 1000 c3213e4f0535415b91f7f2e5b2201c7e

2. Updating the quota in /etc/cinder/cinder.conf doesn't work either; the only way to update the admin project quota is as in No. 1.

Changed in cinder:
status: Incomplete → New
Jay Bryant (jsbryant)
Changed in cinder:
status: New → Confirmed
Ian Govett (igovett)
Changed in cinder:
assignee: nobody → Ian Govett (igovett)
Revision history for this message
Sean McGinnis (sean-mcginnis) wrote : Bug Cleanup

Closing stale bug. If this is still an issue please reopen.

Changed in cinder:
status: Confirmed → Invalid