Comment 9 for bug 1353962

Matt Riedemann (mriedem) wrote : Re: Test job fails with FixedIpLimitExceeded with nova network

Taking a look at the code in this stacktrace:

2014-10-06 08:00:23.848 DEBUG nova.network.manager [req-bf60b9d9-452c-4fbc-9f7a-cc90ed762536 ServersAdminTestJSON-1220859943 ServersAdminTestJSON-233825707] [instance: 50156a3a-8b14-40ea-affe-97e5a510ec32] Allocate fixed ip on network cbcee1f9-13d7-4ec6-a450-d777a3d11120 allocate_fixed_ip /opt/stack/new/nova/nova/network/manager.py:859
2014-10-06 08:00:23.874 DEBUG nova.network.manager [req-bf60b9d9-452c-4fbc-9f7a-cc90ed762536 ServersAdminTestJSON-1220859943 ServersAdminTestJSON-233825707] Quota exceeded for 3292f7b35abf4565a53d099ad878a335, tried to allocate fixed IP allocate_fixed_ip /opt/stack/new/nova/nova/network/manager.py:875
2014-10-06 08:00:23.874 ERROR oslo.messaging.rpc.dispatcher [req-bf60b9d9-452c-4fbc-9f7a-cc90ed762536 ServersAdminTestJSON-1220859943 ServersAdminTestJSON-233825707] Exception during message handling: Maximum number of fixed ips exceeded
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/network/floating_ips.py", line 114, in allocate_for_instance
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher **kwargs)
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/network/manager.py", line 511, in allocate_for_instance
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher requested_networks=requested_networks)
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/network/manager.py", line 192, in _allocate_fixed_ips
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher vpn=vpn, address=address)
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/network/manager.py", line 876, in allocate_fixed_ip
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher raise exception.FixedIpLimitExceeded()
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher FixedIpLimitExceeded: Maximum number of fixed ips exceeded
2014-10-06 08:00:23.874 25609 TRACE oslo.messaging.rpc.dispatcher
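For context, the control flow around manager.py:875 boils down to: try to reserve a fixed IP against the project's quota, and translate a quota failure into FixedIpLimitExceeded. Here's a minimal sketch of that shape — the classes below are stand-ins for illustration, not nova's actual quota engine or exception hierarchy:

```python
class OverQuota(Exception):
    """Stand-in for the exception a quota engine raises on failure."""


class FixedIpLimitExceeded(Exception):
    """Stand-in for nova.exception.FixedIpLimitExceeded."""


class FakeQuotas(object):
    """Stand-in quota engine: a per-project counter with a hard limit."""

    def __init__(self, limit):
        self.limit = limit
        self.used = {}

    def reserve(self, project_id, fixed_ips=1):
        used = self.used.get(project_id, 0)
        if used + fixed_ips > self.limit:
            raise OverQuota()
        self.used[project_id] = used + fixed_ips


def allocate_fixed_ip(quotas, project_id):
    # Mirrors the log lines above: reserve one fixed IP, and on quota
    # failure log and re-raise as FixedIpLimitExceeded.
    try:
        quotas.reserve(project_id, fixed_ips=1)
    except OverQuota:
        print("Quota exceeded for %s, tried to allocate fixed IP"
              % project_id)
        raise FixedIpLimitExceeded()


quotas = FakeQuotas(limit=10)
project = "3292f7b35abf4565a53d099ad878a335"
for _ in range(10):
    allocate_fixed_ip(quotas, project)

# The 11th allocation trips the limit, like in the traceback.
try:
    allocate_fixed_ip(quotas, project)
except FixedIpLimitExceeded:
    print("FixedIpLimitExceeded: Maximum number of fixed ips exceeded")
```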

We're using the FlatDHCPManager:

http://logs.openstack.org/98/125398/11/check/check-tempest-dsvm-postgres-full/f0d9495/logs/etc/nova/nova.conf.txt.gz

So we're using DB quotas (VlanManager uses the no-op quota driver).

We could be logging the wrong project_id:

LOG.debug("Quota exceeded for %s, tried to allocate "
          "fixed IP", context.project_id)
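The concern is that when an admin context acts on another tenant's instance, context.project_id and the instance's own project_id can diverge, so this debug line could name the wrong project. A hypothetical illustration — the Context tuple and dict-shaped instance below are assumptions, not nova's real RequestContext or instance objects:

```python
from collections import namedtuple

# Hypothetical stand-in for a request context.
Context = namedtuple("Context", ["user_id", "project_id"])


def quota_log_target(context, instance):
    # What the debug line logs today...
    logged = context.project_id
    # ...versus the project the fixed IP is actually charged to.
    charged = instance["project_id"]
    return logged, charged


admin_ctx = Context(user_id="admin", project_id="admin_project")
instance = {"uuid": "50156a3a-8b14-40ea-affe-97e5a510ec32",
            "project_id": "3292f7b35abf4565a53d099ad878a335"}

logged, charged = quota_log_target(admin_ctx, instance)
# logged != charged here: the log could mislead us about whose
# quota is actually full.
```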

fixed_ips is a per-project quota in the DB API:

PER_PROJECT_QUOTAS = ['fixed_ips', 'floating_ips', 'networks']

So the user_id doesn't matter here. I'm wondering whether the project id actually used is the admin tenant, but I don't see 3292f7b35abf4565a53d099ad878a335 in tempest.conf, so it's probably not the admin tenant:

http://logs.openstack.org/98/125398/11/check/check-tempest-dsvm-postgres-full/f0d9495/logs/tempest_conf.txt.gz
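The per-project distinction can be sketched like this — quota_key is a hypothetical helper to show the scoping, not nova's actual DB API, which is more involved:

```python
# Resources quota-tracked per project rather than per (project, user),
# as in nova's DB API.
PER_PROJECT_QUOTAS = ['fixed_ips', 'floating_ips', 'networks']


def quota_key(resource, project_id, user_id=None):
    """Hypothetical helper: the key a usage record would be scoped to."""
    if resource in PER_PROJECT_QUOTAS:
        # user_id is ignored: every user in the project shares one pool.
        return (project_id,)
    return (project_id, user_id)


# Two users in the same project draw from the same fixed_ips pool...
assert quota_key('fixed_ips', 'proj', 'alice') == \
       quota_key('fixed_ips', 'proj', 'bob')
# ...but per-user resources are tracked separately.
assert quota_key('instances', 'proj', 'alice') != \
       quota_key('instances', 'proj', 'bob')
```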

From talking with Matt Treinish, it sounds like this test creates a tenant-isolated user/project with an admin role:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/admin/test_servers.py#n29

So yeah, we have an admin role and fixed_ips is a per-project quota resource, which means we need to find out what the per-project quota limit is in a tempest run.

I don't see the quota driver set in nova.conf, so it defaults to the DB quota driver:

http://logs.openstack.org/98/125398/11/check/check-tempest-dsvm-postgres-full/f0d9495/logs/etc/nova/nova.conf.txt.gz

But what are the limits? The default is 10. I don't see anything in devstack that changes quotas unless the virt driver is the fake driver, which isn't the case here.
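To see how easily a default of 10 gets hit, here's a back-of-the-envelope sketch; the concurrency and per-server fixed IP counts are assumptions for illustration, not measurements from this gate run:

```python
DEFAULT_FIXED_IPS_QUOTA = 10  # the default limit discussed above


def would_exceed(concurrent_servers, ips_per_server=1,
                 limit=DEFAULT_FIXED_IPS_QUOTA):
    """Hypothetical helper: True if booting this many servers at once
    would trip the per-project fixed IP quota."""
    return concurrent_servers * ips_per_server > limit


# A single test class booting a handful of servers is fine...
print(would_exceed(5))    # False
# ...but leaked fixed IPs from earlier tests in the same project, or
# enough parallel server builds, push past the default.
print(would_exceed(11))   # True
```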