Cannot launch an instance with IPv6 subnet

Bug #1044383 reported by Akihiro Motoki on 2012-08-31
This bug affects 6 people
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Medium
Assigned to: Amir Sadoughi
Milestone: (none)

Bug Description

I failed to launch an instance with an IPv6 subnet. I used Quantum, but the problem is not specific to the Quantum integration.

[Command lines]
$ quantum net-create net2
$ quantum subnet-create --ip_version 6 net2 2001:db8:dead:beef::1/64
$ nova boot --image $IMAGE_ID --flavor 1 --nic net-id=f4230924-bd0b-48ae-88b0-8fa6a93b06cc server-v6
$ nova list
+--------------------------------------+-----------+--------+----------+
| ID | Name | Status | Networks |
+--------------------------------------+-----------+--------+----------+
| b458b54e-183c-4f25-83e7-c35e6ab43ef3 | server-v6 | ERROR | |
+--------------------------------------+-----------+--------+----------+

[nova-compute log]
2012-08-31 22:44:16 ERROR nova.compute.manager [req-d8f85019-50b8-4db3-9334-bf6137723fa7 demo demo] [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] Instance failed to spawn
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] Traceback (most recent call last):
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] File "/opt/stack/nova/nova/compute/manager.py", line 773, in _spawn
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] self._legacy_nw_info(network_info),
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] File "/opt/stack/nova/nova/compute/manager.py", line 434, in _legacy_nw_info
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] network_info = network_info.legacy()
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] File "/opt/stack/nova/nova/network/model.py", line 338, in legacy
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] raise exception.NovaException(message=msg)
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3] NovaException: v4 subnets are required for legacy nw_info
2012-08-31 22:44:16 TRACE nova.compute.manager [instance: b458b54e-183c-4f25-83e7-c35e6ab43ef3]
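The traceback above ends in the guard in nova/network/model.py that rejects networks without an IPv4 subnet. The following is a minimal sketch of that guard, not the actual nova source: the class and message mirror the traceback, but the internals are illustrative only.

```python
# Illustrative sketch of the legacy() conversion guard that produces the
# traceback above. Names mirror the traceback; internals are hypothetical.

class NovaException(Exception):
    pass


class Subnet(dict):
    """A subnet entry in the new-style network model (dict of attributes)."""


class VIF(object):
    """A virtual interface carrying a list of subnets."""
    def __init__(self, subnets):
        self.subnets = subnets


def legacy(vifs):
    """Convert new-style network_info to the legacy nw_info format."""
    for vif in vifs:
        # The legacy format assumes at least one IPv4 subnet per VIF;
        # a v6-only network trips this check and aborts the spawn.
        v4_subnets = [s for s in vif.subnets if s['version'] == 4]
        if not v4_subnets:
            raise NovaException('v4 subnets are required for legacy nw_info')
        # ... build the legacy (network, info) tuples from v4_subnets ...
    return vifs


# A v6-only VIF, as created by the quantum commands in the bug description:
vif = VIF([Subnet(version=6, cidr='2001:db8:dead:beef::/64')])
try:
    legacy([vif])
except NovaException as e:
    print(e)  # prints: v4 subnets are required for legacy nw_info
```

This is why the failure happens at spawn time rather than at subnet creation: the v6-only network is perfectly valid to Quantum, and only the legacy conversion inside the compute manager rejects it.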

Akihiro Motoki (amotoki) wrote :

After looking through the nova flags for a while, I noticed that I must set use_ipv6=True. Setting use_ipv6 to True resolves the issue I reported, so I will change the status to 'invalid'.

One thing I would like to know is why use_ipv6 defaults to False. Is there a reason for that?
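For reference, enabling the flag discussed above is a one-line change in nova.conf. A minimal fragment (assuming the flag lives under [DEFAULT], as nova flags did at the time):

```ini
[DEFAULT]
# Enable IPv6 address handling in nova; defaults to False.
use_ipv6 = True
```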

Changed in nova:
status: New → Invalid
Akihiro Motoki (amotoki) on 2012-09-04
Changed in nova:
status: Invalid → New
Akihiro Motoki (amotoki) wrote :

I rechecked and confirmed that this bug happens even when use_ipv6 is True, so I have reset the status from "Invalid" to "New".

Unlike in the original bug description, an IPv6 address is now assigned to the instance, but launching it still fails.

command lines and nova-compute log => http://paste.openstack.org/show/20634/
nova.conf => http://paste.openstack.org/show/20633/

git|glance|master[1d91a4d]
git|horizon|master[6e949ac]
git|keystone|master[2759c22]
git|nova|master[0318efe]
git|noVNC|master[cbb55df]
git|python-glanceclient|master[e233f66]
git|python-keystoneclient|master[0a8c960]
git|python-novaclient|master[3a89425]
git|python-openstackclient|master[8010e77]
git|python-quantumclient|master[62f5089]
git|quantum|master[d8160e0]

Changed in nova:
status: New → Confirmed
importance: Undecided → Medium

Is this covered by the following blueprint: https://blueprints.launchpad.net/quantum/+spec/ipv6-feature-parity ?

Changed in nova:
assignee: nobody → Amir Sadoughi (amir-sadoughi)
Amir Sadoughi (amir-sadoughi) wrote :

I was able to reproduce this bug in the following manner:

0. in nova.conf: "network_manager = nova.network.manager.FlatManager"
1. `nova network-create --fixed-range-v6 2001:db8:dead:beef::/64 ipv6net`
2. `nova boot --image $IMAGE --flavor $FLAVOR --nic net-id=$NET-ID new-server`
3. a. Run `nova list`; observe new-server in the ERROR state.
   b. Run `nova show`; the fault is "{u'message': u'NoValidHost', u'code': 500, u'created': u'2013-01-30T17:38:57Z'}"
   c. Observe the error "NovaException: v4 subnets are required for legacy nw_info" in the nova-compute logs.

I will begin working on a fix.

Amir Sadoughi (amir-sadoughi) wrote :

After running `grep -nH -R -e "legacy()" .` over the code base, I have come up with an approach. Let me know if I have missed anything. My current plan of attack is as follows:

0. Make each of the compute drivers compatible with the current network model (non-legacy), pushing any legacy dependency into the firewall driver.
From my enumeration, that means: i. baremetal.BareMetalDriver <bug 1112642>, ii. libvirt.LibvirtDriver <bug 1112646>, and iii. vmwareapi.VMwareESXDriver <bug 1112655>.

1. Remove legacy_nwinfo from the nova.virt.driver.ComputeDriver base class. This involves minor code changes (test code and stub drivers) once the previous step is completed.

2. Fix the firewall driver to use the current network model. <bug 1112657>

3. Fix any other legacy-dependent code to use the current network model (e.g. nova.virt.netutils.get_injected_network_template) <bug 1112659>

4. Remove the legacy function from the network model.
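The direction of the plan above is that consumers iterate the new network model directly instead of calling legacy(), so v6-only subnets no longer need an IPv4 counterpart. A minimal sketch of that idea; the function name and data shapes are hypothetical, not actual nova code:

```python
# Hypothetical consumer of new-style network_info: collects per-interface
# address info without requiring any IPv4 subnet, unlike legacy().

def injected_network_config(vifs):
    """Group each VIF's subnet CIDRs by IP version."""
    interfaces = []
    for vif in vifs:
        entry = {'v4': [], 'v6': []}
        for subnet in vif['subnets']:
            key = 'v4' if subnet['version'] == 4 else 'v6'
            entry[key].append(subnet['cidr'])
        interfaces.append(entry)
    return interfaces


# A v6-only VIF passes through cleanly instead of raising an exception:
vifs = [{'subnets': [{'version': 6, 'cidr': '2001:db8:dead:beef::/64'}]}]
print(injected_network_config(vifs))
# prints: [{'v4': [], 'v6': ['2001:db8:dead:beef::/64']}]
```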

Fix proposed to branch: master
Review: https://review.openstack.org/22360

Changed in nova:
status: Confirmed → In Progress

Fix proposed to branch: master
Review: https://review.openstack.org/22373

Baodong (Robert) Li (baoli) wrote :

I ran into the same problem today. I am wondering what is going on with the review, and which patch (22360 or 22373) actually fixes the issue. Thanks.

Amir Sadoughi (amir-sadoughi) wrote :

I've resumed working on the patch for havana, but progress has slowed as I haven't been able to allocate much time to it.

yuenanliu (yuenanl1989) on 2013-07-09
Changed in nova:
status: In Progress → Invalid