Host is accessible from instance using Linux bridge IPv6 address

Bug #1302080 reported by Darragh O'Reilly on 2014-04-03
This bug affects 3 people
Affects (Importance, Assigned to):
OpenStack Compute (nova): Medium, Brian Haley
OpenStack Security Advisory: Undecided, Unassigned
neutron: Medium, Brian Haley

Bug Description

Opening this as a security bug just in case - but I doubt it is.

If a compute node has enabled auto configuration for IPv6 addresses on all interfaces, then the Linux bridges used for connecting instances will get IPv6 addresses too. Then an instance can reach the host using the address of the bridge it is connected to.

Eg with the ovs-agent and hybrid VIF driver after booting an instance in devstack connected to the "private" network:

vagrant@devstack:~$ brctl show
bridge name bridge id STP enabled interfaces
br-ex 0000.9619b7f0614b no qg-97601dc1-77
br-int 0000.cad7ebe11e46 no qr-edf68f52-f9
       qvoe8eabd6a-46
       tap09437e57-45
qbre8eabd6a-46 8000.0e8e27c7cdfa no qvbe8eabd6a-46
       tape8eabd6a-46

vagrant@devstack:~$ ip address show dev qbre8eabd6a-46
15: qbre8eabd6a-46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 0e:8e:27:c7:cd:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::dcc6:30ff:fe27:37a1/64 scope link
       valid_lft forever preferred_lft forever

Note the address fe80::dcc6:30ff:fe27:37a1, then log in to the instance:

$ ssh ubuntu@172.24.4.3
ubuntu@vm1:~$ ip address show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:d1:7e:fe brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.9/24 brd 10.0.0.255 scope global eth0
    inet6 fe80::f816:3eff:fed1:7efe/64 scope link
       valid_lft forever preferred_lft forever

ubuntu@vm1:~$ ping6 -c4 -I eth0 ff02::1
PING ff02::1(ff02::1) from fe80::f816:3eff:fed1:7efe eth0: 56 data bytes
64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=1 ttl=64 time=16.9 ms
64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=1 ttl=64 time=38.4 ms (DUP!)
64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=2 ttl=64 time=1.44 ms
64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=2 ttl=64 time=3.88 ms (DUP!)
64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=3 ttl=64 time=8.63 ms
64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=3 ttl=64 time=14.0 ms (DUP!)
64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=4 ttl=64 time=0.476 ms

ubuntu@vm1:~$ ping6 -c1 -I eth0 fe80::dcc6:30ff:fe27:37a1
PING fe80::dcc6:30ff:fe27:37a1(fe80::dcc6:30ff:fe27:37a1) from fe80::f816:3eff:fed1:7efe eth0: 56 data bytes
64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=1 ttl=64 time=2.86 ms

ubuntu@vm1:~$ ssh fe80::dcc6:30ff:fe27:37a1%eth0
The authenticity of host 'fe80::dcc6:30ff:fe27:37a1%eth0 (fe80::dcc6:30ff:fe27:37a1%eth0)' can't be established.
ECDSA key fingerprint is 11:5d:55:29:8a:77:d8:08:b4:00:9b:a3:61:93:fe:e5.
Are you sure you want to continue connecting (yes/no)?

I thought the anti-spoof rules should block packets from the fe80 address, but looking at the ip6tables-save output (attached), the spoof chain and its default DROP rule are missing. That must be because there is no IPv6 subnet on the "private" network - maybe that's another problem. I inserted them manually, but that did not work because these packets hit the host's INPUT chain, while the security group filters are on the FORWARD chain.

So maybe all that is needed is a note in the docs saying that auto configuration should be disabled by default and selectively enabled on interfaces where needed. E.g.:

net.ipv6.conf.all.autoconf=0
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
# enable on lo and eth1
net.ipv6.conf.lo.disable_ipv6=0
net.ipv6.conf.eth1.disable_ipv6=0

Or maybe the VIF drivers should disable IPv6 on the bridge when creating it.
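A driver-side mitigation could look roughly like the sketch below. This is illustrative only: `ensure_bridge_no_ipv6` and `disable_ipv6_cmd` are made-up names, not nova's actual API, and the command sequence mirrors the brctl/sysctl calls shown elsewhere in this report.

```python
import subprocess


def disable_ipv6_cmd(device):
    """Build the sysctl command that turns IPv6 off for a device."""
    return ['sysctl', '-w', 'net.ipv6.conf.%s.disable_ipv6=1' % device]


def ensure_bridge_no_ipv6(bridge):
    """Create a bridge and disable IPv6 on it before bringing it up.

    Disabling IPv6 before the bridge comes up means the kernel never
    autoconfigures a link-local address on it or processes Router
    Advertisements arriving from the tenant network.
    """
    subprocess.check_call(['brctl', 'addbr', bridge])
    subprocess.check_call(['brctl', 'setfd', bridge, '0'])
    subprocess.check_call(['brctl', 'stp', bridge, 'off'])
    subprocess.check_call(disable_ipv6_cmd(bridge))
    subprocess.check_call(['ip', 'link', 'set', bridge, 'up'])
```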

description: updated
description: updated
Jeremy Stanley (fungi) on 2014-04-03
Changed in ossa:
status: New → Incomplete
Jeremy Stanley (fungi) wrote :

What releases of Neutron are affected by this bug?

Neutron core security reviewers, any opinion on the scope and exploitability of this?

Any releases that have instances plugged into Linux bridges. I have tried with master and the linuxbridge-agent and the ovs-agent. The ovs-agent needs to have Nova configured to use the hybrid VIF driver.

I think it's something that should be documented in the security guide. You would not want tenants to be able to do this for the very same reason you don't give them access to a management network. I just opened it as a security bug to be on the safe side.

Jeremy Stanley (fungi) wrote :

Thanks--I agree this sounds more like a configuration mistake we should warn deployers/operators against making, possibly in the Security Guide or an OSSN. If there are no objections from the Neutron core security reviewers, we should switch this to a public bug and add the security tag to bring it to the attention of the OSSG in case they want to document it.

Thierry Carrez (ttx) wrote :

neutron core-sec: please confirm we can open this one.

The host should never be reachable from an instance.
This is therefore a violation of tenant isolation principle.

However if there is no way this can be exploited to access services running on the host (assuming the host has been secured properly) then I think it's ok to open this bug.

Under the assumption that the host is properly secured I don't think there's any possible exploit, but I'd wait for other people to comment.

Thierry Carrez (ttx) wrote :

This bug should be fixed openly as a strengthening measure.

information type: Private Security → Public
Changed in ossa:
status: Incomplete → Won't Fix
Changed in neutron:
importance: Undecided → Medium
status: New → Confirmed
Changed in neutron:
assignee: nobody → Aniruddha Singh Gautam (aniruddha-gautam)
tags: added: ipv6
Sridhar Gaddam (sridhargaddam) wrote :

AFAIU the following patch in nova would address the issue reported in the bug - https://review.openstack.org/#/c/198054

Matt Riedemann (mriedem) on 2015-09-22
tags: added: network
Changed in nova:
status: New → In Progress
assignee: nobody → Adam Kacmarsky (adam-kacmarsky)
importance: Undecided → Medium
assignee: Adam Kacmarsky (adam-kacmarsky) → Brian Haley (brian-haley)
Changed in nova:
assignee: Brian Haley (brian-haley) → Rawlin Peters (rawlin-peters)

Fix proposed to branch: master
Review: https://review.openstack.org/241076

Changed in neutron:
assignee: Aniruddha Singh Gautam (aniruddha-gautam) → Brian Haley (brian-haley)
status: Confirmed → In Progress

Reviewed: https://review.openstack.org/241076
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=404eaead793b3192982ae247685970973609be1f
Submitter: Jenkins
Branch: master

commit 404eaead793b3192982ae247685970973609be1f
Author: Brian Haley <email address hidden>
Date: Mon Nov 2 22:04:11 2015 -0500

    Disable IPv6 on bridge devices in LinuxBridgeManager

    We don't want to create a bridge device with an IPv6 address because
    it will see the Router Advertisement from Neutron.

    Change-Id: If59a823804d3477c5d8877f46fcc4c018af57a5a
    Closes-bug: 1302080

Changed in neutron:
status: In Progress → Fix Committed
Changed in neutron:
status: Fix Committed → Fix Released
Changed in nova:
assignee: Rawlin Peters (rawlin-peters) → Brian Haley (brian-haley)

Reviewed: https://review.openstack.org/198054
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5ab1b1b1c456b8b43edbd1bddd74b96b56ab80e6
Submitter: Jenkins
Branch: master

commit 5ab1b1b1c456b8b43edbd1bddd74b96b56ab80e6
Author: Adam Kacmarsky <email address hidden>
Date: Thu Jul 2 10:13:16 2015 -0600

    Disable IPv6 on bridge devices

    The qbr bridge should not have any IPv6 addresses, either
    link-local, or on the tenant's private network due to the
    bridge processing Router Advertisements from Neutron and
    auto-configuring addresses, since it will allow access to
    the hypervisor from a tenant VM.

    The bridge only exists to allow the Neutron security group
    code to work with OVS, so we can safely disable IPv6 on it.

    Closes-bug: 1470931
    Partial-bug: 1302080

    Change-Id: Ideecab1c21b240bcca71973ed74b0374afb20e5e

Reviewed: https://review.openstack.org/274796
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=44401727235c5a9736c4229f7fc581e6a970ff91
Submitter: Jenkins
Branch: stable/liberty

commit 44401727235c5a9736c4229f7fc581e6a970ff91
Author: Adam Kacmarsky <email address hidden>
Date: Thu Jul 2 10:13:16 2015 -0600

    Disable IPv6 on bridge devices

    The qbr bridge should not have any IPv6 addresses, either
    link-local, or on the tenant's private network due to the
    bridge processing Router Advertisements from Neutron and
    auto-configuring addresses, since it will allow access to
    the hypervisor from a tenant VM.

    The bridge only exists to allow the Neutron security group
    code to work with OVS, so we can safely disable IPv6 on it.

    Closes-bug: 1470931
    Partial-bug: 1302080

    Conflicts:
     nova/tests/unit/virt/libvirt/test_vif.py

    Change-Id: Ideecab1c21b240bcca71973ed74b0374afb20e5e
    (cherry picked from commit 5ab1b1b1c456b8b43edbd1bddd74b96b56ab80e6)

tags: added: in-stable-liberty

Reviewed: https://review.openstack.org/271373
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a381aa07d9c0ea586b649420643b4f91b65979d8
Submitter: Jenkins
Branch: stable/liberty

commit a381aa07d9c0ea586b649420643b4f91b65979d8
Author: Brian Haley <email address hidden>
Date: Mon Nov 2 22:04:11 2015 -0500

    Disable IPv6 on bridge devices in LinuxBridgeManager

    We don't want to create a bridge device with an IPv6 address because
    it will see the Router Advertisement from Neutron.

    Conflicts:
     neutron/agent/linux/bridge_lib.py

    Change-Id: If59a823804d3477c5d8877f46fcc4c018af57a5a
    Closes-bug: 1302080
    (cherry picked from commit 404eaead793b3192982ae247685970973609be1f)

Brian Haley (brian-haley) wrote :

The fix to nova has been released, but it was tagged with "partial-bug" based on a review comment; it should have stayed "closes-bug" so the infra scripts would update this bug automatically.

So I will be marking this "fix released" accordingly.

Changed in nova:
status: In Progress → Fix Released
Dustin Lundquist (dlundquist) wrote :

It seems like we should fix this in kilo as well, but the fix depends on BridgeManager changes in I4b9d755677bba0d487a261004d9ba9b11116101f. Is it worth a new patch to explicitly disable IPv6 in ensure_bridge() for Kilo?

Tore Anderson (toreanderson) wrote :

Is this bug really fixed? Running Mitaka, it seems not. Using linuxbridge in combination with VXLAN, only the vxlan interface gets disable_ipv6=1 set, not the bridge.

This is from the compute node when booting up an instance (first one on that particular network, so all the interfaces must be provisioned):

# egrep 'brq3cd6a5c8-ec|disable_ipv6' linuxbridge-agent.log
2016-04-18 13:08:41.701 5916 DEBUG neutron.agent.linux.utils [req-1926075e-555e-4363-ab24-4c93d0b5c989 - - - - -] Running command (rootwrap daemon): ['sysctl', '-w', 'net.ipv6.conf.vxlan-65601.disable_ipv6=1'] execute_rootwrap_daemon /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100
2016-04-18 13:08:41.710 5916 DEBUG neutron.agent.linux.utils [req-1926075e-555e-4363-ab24-4c93d0b5c989 - - - - -] Running command (rootwrap daemon): ['ip', 'link', 'set', 'brq3cd6a5c8-ec', 'up'] execute_rootwrap_daemon /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100
2016-04-18 13:08:41.714 5916 DEBUG neutron.agent.linux.utils [req-1926075e-555e-4363-ab24-4c93d0b5c989 - - - - -] Running command (rootwrap daemon): ['brctl', 'addif', 'brq3cd6a5c8-ec', 'vxlan-65601'] execute_rootwrap_daemon /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100
2016-04-18 13:08:41.729 5916 DEBUG neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [req-1926075e-555e-4363-ab24-4c93d0b5c989 - - - - -] Skip adding device tap323ae2d2-4b to brq3cd6a5c8-ec. It is owned by compute:None and thus added elsewhere. _add_tap_interface /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:472

As you can see, disable_ipv6 gets set on the vxlan interface, but not the bridge (nor the tap interface for that matter).

And lo and behold, the bridge interface has acquired a global IPv6 address (because there's a neutron router/L3 agent attached to the network with ipv6-address-mode=slaac and ipv6-ra-mode=slaac):

# ip address list dev brq3cd6a5c8-ec
18: brq3cd6a5c8-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue state UP
    link/ether b6:e4:c1:aa:90:70 brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:123:456:b4e4:c1ff:feaa:9070/64 scope global mngtmpaddr dynamic
       valid_lft 86365sec preferred_lft 14365sec
    inet6 fe80::50b2:4bff:fe32:2a3c/64 scope link
       valid_lft forever preferred_lft forever

Furthermore, I'd like to stress that this IPv6 address is *GLOBALLY REACHABLE*! Yes, that means that anyone anywhere on the IPv6 Internet (including the instances themselves) can initiate, e.g., SSH connections *directly* to the compute node - even if it's behind a firewall or is using only private RFC 1918 addresses. These packets will look just like normal VXLAN packets coming from the L3 agent, so they'll bypass any normal network-level protection.

One workaround is to set /proc/sys/net/ipv6/conf/default/disable_ipv6=1. That causes the kernel to ensure that all the relevant devices (vxlan, bridge, tap) get created with IPv6 disabled by default. However, if you do want IPv6 on other unrelated interfaces (e.g., for management of the compute node itself or to carry vxlan traffic), this could be a problem...


Tore Anderson (toreanderson) wrote :

I've found that my suggested workaround to set /proc/sys/net/ipv6/conf/default/disable_ipv6=1 must only be run on the compute nodes. If it's run on the network nodes, then neutron-l3-agent flat out refuses to configure any IPv6 connectivity on the routers (even though the sysctl is set to 0 inside the qrouter network namespaces). See https://github.com/openstack/neutron/blob/master/neutron/common/ipv6_utils.py#L51-L64

However it seems that the setting is not necessary on the network nodes in any case, disable_ipv6 does get set to 1 on the linuxbridge devices there by something (I have not attempted to figure out what, exactly). It is only on the compute nodes that I need to set disable_ipv6=1 to avoid the global IPv6 address from being configured on the linuxbridges.

Tore Anderson (toreanderson) wrote :

Ok, so setting default/disable_ipv6=1 is *not* a viable solution, not even on the compute nodes. The reason: neutron-linuxbridge-agent will (just like neutron-l3-agent) end up believing that IPv6 is completely disabled on the system, and skip applying the IPv6 security group when plumbing an instance. The instance thus ends up being completely wide open from the global IPv6 Internet. Not good.

Setting default/accept_ra=0 seems like a better solution, as this will at the very least stop the services running directly on the compute node from being globally reachable. However it will not prevent the Linux kernel from auto-configuring a link-local address on the bridge device, which in turn is directly reachable from the instances without any kind of firewalling. This bug is in other words *NOT* fixed in Mitaka, as far as I can tell.

Tore Anderson (toreanderson) wrote :

As I've mentioned in my previous three comments, this bug is *not* fixed. It has been erroneously marked as fixed. Can someone with the appropriate access please re-open it? Otherwise I suppose I'll just have to submit a new duplicate issue.

Brian Haley (brian-haley) wrote :

Tore, so it looks like you're using linuxbridge, which I will admit I don't typically run. I'll try and get a config with that up and running.

FYI, I can't reproduce what you're seeing using OVS.

Config:
- Ubuntu 16.04
- single-node devstack
- neutron w/OVS configured with OVSHybridIptablesFirewallDriver
- DVR enabled

Boot a single VM

qbr bridge is created for hybrid plugging, which was where the IPv6 link-local address was being configured.

$ sysctl net.ipv6.conf.qbr74767d3d-4a.disable_ipv6
net.ipv6.conf.qbr74767d3d-4a.disable_ipv6 = 1

$ ip a s qbr74767d3d-4a
15: qbr74767d3d-4a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether ce:93:48:a0:42:c0 brd ff:ff:ff:ff:ff:ff

So there are no addresses there at all.

From inside the VM I can ping the all-nodes multicast address:

$ ping6 -I eth0 ff02::1
PING ff02::1 (ff02::1): 56 data bytes
ping6: can't bind to interface eth0: Operation not permitted
64 bytes from fe80::f816:3eff:fec0:a651: seq=0 ttl=64 time=2.408 ms
64 bytes from fe80::f816:3eff:fe92:17e9: seq=0 ttl=64 time=5.795 ms (DUP!)
64 bytes from fe80::f816:3eff:fe41:6efc: seq=0 ttl=64 time=7.924 ms (DUP!)
64 bytes from fe80::f816:3eff:fe6b:f22: seq=0 ttl=64 time=8.231 ms (DUP!)
64 bytes from fe80::f816:3eff:fe7d:2e94: seq=0 ttl=64 time=10.130 ms (DUP!)
64 bytes from fe80::f816:3eff:fe5e:b721: seq=0 ttl=64 time=10.410 ms (DUP!)

Each address is something that responded:

fe80::f816:3eff:fec0:a651 - the VM itself
fe80::f816:3eff:fe92:17e9 - dhcp server
fe80::f816:3eff:fe41:6efc - router (distributed interface - v4 subnet, it's local to the VM)
fe80::f816:3eff:fe6b:f22 - router (centralized snat interface - v4 subnet)
fe80::f816:3eff:fe7d:2e94 - router (distributed interface - v6 subnet)
fe80::f816:3eff:fe5e:b721 - router (centralized snat interface - v6 subnet)

I tried to ssh to all of those addresses from the neutron infrastructure and couldn't - connection refused.

Tore Anderson (toreanderson) wrote :

Hi Brian,

I am fairly certain that using linuxbridge is a prerequisite for reproducing the issue I'm seeing. I'm not sure if multiple nodes are required (I'm not familiar with devstack), but that's what I have at least - compute nodes with nova separate from network nodes with neutron-l3-agent and neutron-dhcp-agent, with neutron-linuxbridge-agent on both terminating vxlan tunnels between them.

Brian Haley (brian-haley) wrote :

Tore - things seem to work fine on a single-node devstack linuxbridge config, the VM can only "see" the router and dhcp agent addresses. And I don't see any IPv6 addresses on the brq interfaces. I also see disable_ipv6=1 being set on the brq devices, which you didn't see; I don't know why that is.

I just don't know how quickly I'll get a multi-node setup running.

Tore Anderson (toreanderson) wrote :

Hmm. Thinking a bit more about it, I think you'll probably need a multi-node setup. As I observed in comment #19, on the network nodes the disable_ipv6 sysctl *does* get set, it's only on the compute nodes where it does not. If you have everything running on a single node, I'm guessing that whatever it is that is responsible for setting disable_ipv6 correctly on my network nodes is saving the day for you on your hybrid network+compute node.

The nova patch fixed it for ovs. But nova uses this with neutron linuxbridge (not sure if nova-network uses it too)
https://github.com/openstack/nova/blob/stable/mitaka/nova/network/linux_net.py#L1616

You should be able to get nova to create the bridge on an all-in-one by booting a VM on a subnet with DHCP disabled and no attached routers.

Brian Haley (brian-haley) wrote :

Tore, I think I've found the code path, perhaps you could confirm based on the debug messages in your log files.

The nova vif driver (nova/virt/libvirt/vif.py) plug_bridge() method will be called, which will ensure the bridge exists. For this it will call over to ensure_bridge() in nova/network/linux_net.py. That code does not disable ipv6 :(

The hybrid case used when OVS is the bridge calls plug_ovs_hybrid(), which uses the code that was added to disable ipv6.

That's my guess without actually setting it up, so it might just be a one-line change?

Tore Anderson (toreanderson) wrote :

Attached are the results from when reproducing the bug with debugging output enabled. It's from a previously unused compute node (no instances nor any virtual networks running), and then I did "nova boot" to fire up an instance.

As you can see, the auto-created brq device on the compute node gets configured with a global IPv6 address and an IPv6 default route. This address is reachable from anywhere in the world, bypassing any network firewalls or anything else that would protect the compute node from unauthorised access. I've therefore chosen to anonymise the addresses in the output.

From reading the previous comments on this issue, it seems that nobody realised that the brq devices would obtain global IPv6 connectivity if there is an IPv6 router on the network. In all likelihood this was the case for the fixed OVS part of the bug as well. This aspect significantly increases the exposure to possible unauthorised access to the compute node, so it might be wise to reconsider whether or not this should be considered a security issue.

Anyway. While the vxlan device does get disable_ipv6 set, the tap device does not. It therefore auto-configures a link-local IPv6 address, but not a global one or a default route. Presumably the Linux kernel will not process RAs on devices that are members of a bridge. So while this might not be a problem per se, I think disable_ipv6 should be set on the tap device anyway as a precaution. There is no reason at all to retain active layer-3 configuration on any of these interfaces, as far as I can tell.

Brian Haley (brian-haley) wrote :

Tore, the debug output from the nova logs (cpu) would be most helpful, as that is where the brq device is created and configured.

And we did realize the qvb-* devices created on an OVS hybrid plug would auto-configure an IPv6 address since they would process the RA. But if the prefix assigned is only tenant-scope (i.e. some 2001:db8 prefix they are testing with) then there is no Internet reachability; it is mainly that the VM has access to the compute node.

I do not feel we need to worry about the tap devices - can the VM ping that address? And my single-node has disable_ipv6=1 on the taps; I don't know yet why you are seeing something different.

Tore Anderson (toreanderson) wrote :

Attaching debug log from nova-compute, as requested.

While it's true that a 2001:db8 prefix used for testing wouldn't be globally available, that's kind of beside the point, I think. Nobody would use 2001:db8 prefixes in production - there's no NAT or floating IPs in IPv6, so any production deployment of IPv6 will necessarily use globally reachable prefixes and likely RAs with the A-flag set, and thus be vulnerable to unauthorised access to the compute node.

I've tested briefly and I have not been successful in accessing the link-local address on the tap device from the instance or from bare-metal hosts outside of OpenStack residing on the same network. There's no response to ICMPv6 neighbour solicitations, and if I configure a static neighbour entry on the instance or the bare-metal host with the MAC address of the tap or brq device, the packets simply go unanswered. However, considering that the tap device is there only to provide forwarding at layer-2, it does strike me as wrong that there is active layer-3 configuration on it. For all I know, the fact that I cannot reach the link-local address from the instance is dependent on logic in the Linux kernel of the compute node which could change in the future. Therefore I think it would be prudent to set the disable_ipv6 sysctl on this device as well. Considering that disable_ipv6 does get set on the network node, it also seems more consistent that the same thing should happen on the compute nodes.

Brian Haley (brian-haley) wrote :

Thanks for the log, it shows the path I assumed from comment #27:

DEBUG nova.virt.libvirt.vif Ensuring bridge brq30861ce7-0a plug_bridge

DEBUG nova.network.linux_net Starting Bridge brq30861ce7-0a ensure_bridge

DEBUG oslo_concurrency.processutils Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf brctl addbr brq30861ce7-0a execute

DEBUG oslo_concurrency.processutils Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf brctl setfd brq30861ce7-0a 0 execute

DEBUG oslo_concurrency.processutils Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf brctl stp brq30861ce7-0a off execute

No disable_ipv6 call.

Let me send out a patch, hopefully you can give it a try.

This issue was fixed in the openstack/os-vif 1.2.0 release.

Any progress with https://review.openstack.org/313070
I think this still affects linuxbridge users if I'm not mistaken?

Change abandoned by Michael Still (<email address hidden>) on branch: master
Review: https://review.openstack.org/313070
Reason: This patch has been sitting unchanged for more than 12 weeks. I am therefore going to abandon it to keep the review queue sane. Please feel free to restore the change if you're still working on it.
