Error creating IPv6 subnet on routed network segment

Bug #1832636 reported by Ryan Tidwell
Affects: neutron
Status: New
Importance: Medium
Assigned to: Unassigned

Bug Description

I have a routed provider network with 2 segments. Before attempting the IPv6 subnet, I had created IPv4 subnets on the segments and attached a router to the external network without issue. When I attempt to create an IPv6 subnet on a segment, the following occurs:

> openstack subnet create --subnet-pool ULA --prefix-length 64 --network-segment 89254a60-1d84-4162-9d89-d678e8103594 --network c96b20c7-0702-4efa-a980-4e918979300a --ip-version 6 --no-dhcp RACK_1_ULA

BadRequestException: 400: Client Error for url: http://controller:9696/v2.0/subnets, Invalid input for operation: Failed to create port on network c96b20c7-0702-4efa-a980-4e918979300a, because fixed_ips included invalid subnet 1237b25c-a357-40b2-9b5b-f311cd0bb075.

Interestingly enough, the subnet was created despite this error. The subnet ID referenced in the error message is the UUID of the newly created subnet:

> openstack subnet show 1237b25c-a357-40b2-9b5b-f311cd0bb075
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools | fd00::2-fd00::ffff:ffff:ffff:ffff |
| cidr | fd00::/64 |
| created_at | 2019-06-12T19:04:53Z |
| description | |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | fd00::1 |
| host_routes | |
| id | 1237b25c-a357-40b2-9b5b-f311cd0bb075 |
| ip_version | 6 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | Munch({'cloud': '', 'region_name': '', 'zone': None, 'project': Munch({'id': 'ea4d84a4ec2449a691eba8e9a664996a', 'name': 'admin', 'domain_id': 'default', 'domain_name': None})}) |
| name | RACK_1_ULA |
| network_id | c96b20c7-0702-4efa-a980-4e918979300a |
| prefix_length | None |
| project_id | ea4d84a4ec2449a691eba8e9a664996a |
| revision_number | 0 |
| segment_id | 89254a60-1d84-4162-9d89-d678e8103594 |
| service_types | |
| subnetpool_id | 08b4162a-8035-4296-8047-a4f3ed41be9b |
| tags | |
| updated_at | 2019-06-12T19:04:53Z |
+-------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Another issue: when I checked the logs (debug is enabled), there is no stack trace or any other indication of an error beyond the following lines, found by tracing the request ID:

Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron.api.v2.base [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Request body: {'subnet': {'name': 'RACK_1_ULA', 'subnetpool_id': '08b4162a-8035-4296-8047-a4f3ed41be9b', 'ip_version': 6, 'prefixlen': '64', 'enable_dhcp': False, 'network_id': 'c96b20c7-0702-4efa-a980-4e918979300a', 'segment_id': '89254a60-1d84-4162-9d89-d678e8103594'}} {{(pid=11980) prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:715}}
Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron.db.quota.driver [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Resources subnetpool have unlimited quota limit. It is not required to calculate headroom {{(pid=11980) make_reservation /opt/stack/neutron/neutron/db/quota/driver.py:222}}
Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron.quota.resource [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Usage tracker for resource:subnet and tenant:ea4d84a4ec2449a691eba8e9a664996a is out of sync, need to count used quota {{(pid=11980) count_used /opt/stack/neutron/neutron/quota/resource.py:273}}
Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron.quota.resource [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Quota usage for subnet was recalculated. Used quota:4. {{(pid=11980) count_used /opt/stack/neutron/neutron/quota/resource.py:293}}
Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron.db.quota.driver [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Attempting to reserve 1 items for resource subnet. Total usage: 4; quota limit: 100; headroom:96 {{(pid=11980) make_reservation /opt/stack/neutron/neutron/db/quota/driver.py:254}}
Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron.pecan_wsgi.hooks.quota_enforcement [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Made reservation on behalf of ea4d84a4ec2449a691eba8e9a664996a for: {'subnet': 1} {{(pid=11980) before /opt/stack/neutron/neutron/pecan_wsgi/hooks/quota_enforcement.py:55}}
Jun 12 19:04:53 devstack-1 neutron-server[11973]: DEBUG neutron_lib.callbacks.manager [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Notify callbacks ['neutron.services.segments.plugin.SegmentHostRoutes.host_routes_before_create--9223372036850996506'] for subnet, before_create {{(pid=11980) _notify_loop /usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193}}
Jun 12 19:04:54 devstack-1 neutron-server[11973]: DEBUG neutron_lib.callbacks.manager [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Notify callbacks ['neutron.services.l3_router.l3_router_plugin.DVRResourceOperationHandler._handle_distributed_migration--9223372036850959293', 'neutron.services.l3_router.l3_router_plugin.L3_HA_NAT_db_mixin._validate_migration--9223372036851001076', 'neutron.services.l3_router.service_providers.driver_controller.DriverController._update_router_provider--9223372036854656083'] for router, precommit_update {{(pid=11980) _notify_loop /usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193}}
Jun 12 19:04:55 devstack-1 neutron-server[11973]: DEBUG neutron_lib.callbacks.manager [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] Notify callbacks ['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler--9223372036830325123'] for port, before_update {{(pid=11980) _notify_loop /usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193}}
Jun 12 19:04:55 devstack-1 neutron-server[11973]: INFO neutron.pecan_wsgi.hooks.translation [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] POST failed (client error): The server could not comply with the request since it is either malformed or otherwise incorrect.
Jun 12 19:04:55 devstack-1 neutron-server[11973]: DEBUG neutron.pecan_wsgi.hooks.notifier [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] No notification will be sent due to unsuccessful status code: 400 {{(pid=11980) after /opt/stack/neutron/neutron/pecan_wsgi/hooks/notifier.py:79}}
Jun 12 19:04:55 devstack-1 neutron-server[11973]: INFO neutron.wsgi [None req-742bf9e0-5f9c-42df-9f6f-991f3a56c9b6 admin admin] 192.168.1.8 "POST /v2.0/subnets HTTP/1.1" status: 400 len: 458 time: 2.2952359

Even with the request traced through the logs by request ID, the source of the "malformed request" error is not apparent.

To summarize:

- IPv6 subnet creation on an external network segment returns an error to the client, but the subnet gets created anyway
- The error reported to the client is a port creation failure caused by the newly created subnet being passed in the port's 'fixed_ips' (what port is even being created during a subnet create operation?)
- The mysterious port operation treats the newly created subnet as invalid, which leads me to believe there is an issue with DB transaction boundaries
- The actual source of the error is hidden, even with debug enabled
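The transaction-boundary hypothesis above can be illustrated with a minimal, self-contained sketch. This uses plain sqlite3 and a hypothetical `subnets` table, nothing neutron-specific: a row inserted inside an uncommitted transaction is invisible to a second DB session, which is exactly the kind of failure a concurrent port operation validating 'fixed_ips' mid-subnet-create would hit.

```python
# Hypothetical sketch of the transaction-visibility problem.
# The table name and schema are invented for illustration; only the
# subnet UUID comes from the bug report above.
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "sketch.db")
subnet_id = "1237b25c-a357-40b2-9b5b-f311cd0bb075"

# Session 1: the subnet-create transaction. isolation_level=None gives
# us manual transaction control via explicit BEGIN/COMMIT.
writer = sqlite3.connect(db_path, isolation_level=None)
writer.execute("CREATE TABLE subnets (id TEXT PRIMARY KEY)")
writer.execute("BEGIN")
writer.execute("INSERT INTO subnets (id) VALUES (?)", (subnet_id,))
# ... transaction still open, subnet row not yet committed ...

# Session 2: a concurrent operation (e.g. a port update validating
# fixed_ips) looks the subnet up and finds nothing.
reader = sqlite3.connect(db_path, isolation_level=None)
visible = reader.execute(
    "SELECT id FROM subnets WHERE id = ?", (subnet_id,)).fetchall()
print(visible)  # [] -> "fixed_ips included invalid subnet"

# Once session 1 commits, the subnet becomes visible.
writer.execute("COMMIT")
after = reader.execute(
    "SELECT id FROM subnets WHERE id = ?", (subnet_id,)).fetchall()
print(after)  # [('1237b25c-a357-40b2-9b5b-f311cd0bb075',)]
```

If neutron's port update runs in a separate session before the subnet-create transaction commits, it would see exactly this: a subnet that "does not exist" even though the create ultimately succeeds.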

Revision history for this message
Ryan Tidwell (ryan-tidwell) wrote :

I uncovered another clue in the L3 agent logs. As the L3 agent attempts to create a FIP gateway port, it encounters this error: http://paste.openstack.org/show/752840/ . The error seems to be coming out of the IPAM subsystem, which may point at where the API calls are failing.

Revision history for this message
Miguel Lavalle (minsel) wrote :

I don't think the problem is the addition of an IPv6 subnet itself. I use this Vagrant script to deploy routed-networks development labs: https://github.com/miguellavalle/routednetworksvagrant. As you can see, it adds IPv6 subnets to the routed network here: https://github.com/miguellavalle/routednetworksvagrant/blob/057e0d9e372c1169463cc49f4d2a1cd0cf82141a/provisioning/setup-allinone.sh#L142-L149. I used this Vagrant script to deploy a lab earlier this week without a problem.

IIUC what you are saying above, the difference is that you are up-linking the IPv4 subnet associated with the routed network to an external network with a router. Is that correct?

Changed in neutron:
importance: Undecided → Medium
Revision history for this message
Miguel Lavalle (minsel) wrote :

Up to this point we didn't assume floating IPs and routers would be used with routed networks, so this hasn't been tested.

Revision history for this message
Ryan Tidwell (ryan-tidwell) wrote :

There aren't any floating IPs in my setup; I'm working on that spec :) I do have a router using the routed network as its gateway network. All connectivity works as expected (BGP, etc.). My provider network is an external network, and I am trying to make it dual-stack. As I mentioned in the original report, IPv4 works fine, but I encounter this error when attempting to add IPv6 subnets to segments that already have IPv4 subnets on them.

Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

> what port is even being created during a subnet create operation?

Looking at the code, the "Failed to create port on network" error can happen for update_port as well. I guess that is the case in your situation.
