2017-11-24 11:16:18 |
Paul Peereboom |
bug |
|
|
added bug |
2017-11-24 11:20:08 |
Paul Peereboom |
bug |
|
|
added subscriber Gerhard Muntingh |
2017-11-24 13:18:44 |
Tristan Cacqueray |
bug task added |
|
ossa |
|
2017-11-24 13:18:51 |
Tristan Cacqueray |
ossa: status |
New |
Incomplete |
|
2017-11-24 13:19:11 |
Tristan Cacqueray |
description |
Eavesdropping private traffic
=============================
Abstract
--------
We've discovered a security issue that allows end users, from within their own private network, to receive traffic from, and send traffic to, other private networks on the same compute node.
Description
-----------
During live-migration there is a small time window where the ports of instances are untagged. Instances have a port trunked to the integration bridge and receive 802.1Q tagged private traffic from other tenants.
If the port is administratively down during live migration, the port will remain in trunk mode indefinitely.
Traffic is possible between ports that are administratively down, even between tenants' self-service networks.
Conditions
----------
The following conditions are necessary:
* Open vSwitch self-service networks
* An OpenStack administrator or an automated process schedules a live migration
We tested this on Newton.
Issues
------
This outcome is the result of multiple independent issues. We will list the most important first, and follow with bugs that create a fragile situation.
Issue #1 Initially creating a trunk port
When the port is initially created, it is in trunk mode. This creates a fail-open situation.
See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52
Recommendation: create ports in the port_dead state; don't leave them dangling in trunk mode. Add a drop flow initially.
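The recommendation above can be sketched as the shape of the ovs-vsctl argument list os-vif would build, with the dead VLAN tag included from the start. This is a hedged illustration, not the actual os-vif change: the function and argument names mirror the `_create_ovs_vif_cmd` excerpt quoted in the workaround below, but are simplified here.

```python
# Sketch: build the ovs-vsctl argument list so a freshly plugged port
# starts on the dead VLAN (4095) instead of untagged (trunk mode).
# Names are illustrative; they follow the vif_plug_ovs excerpt below.
DEAD_VLAN_TAG = 4095

def create_ovs_vif_cmd(bridge, dev, iface_id, mac, instance_id):
    return ['--', '--if-exists', 'del-port', dev, '--',
            'add-port', bridge, dev,
            'tag=%d' % DEAD_VLAN_TAG,  # fail closed: port starts dead
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id]

cmd = create_ovs_vif_cmd('br-int', 'qvoexample', 'port-uuid',
                         'fa:16:3e:00:00:01', 'vm-uuid')
assert 'tag=4095' in cmd
```

With this ordering, the tag is set in the same ovs-vsctl transaction as the add-port, so there is no window in which the port exists untagged.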
Issue #2 Order of creation.
The instance is actually migrated before the (networking) configuration is completed.
Recommendation: delay completing the live migration until the underlying network configuration has been applied completely.
Issue #3 Not closing the port when it is down.
Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged.
https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995
Recommendation: Make port_dead also shut the port if no tag is found. Log a warning if this happens.
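The hardened port_dead logic recommended here can be sketched as follows. This is a simplified stand-in, not the neutron patch itself: the bridge methods (`db_get_val`, `set_db_attribute`, `drop_port`) follow the agent excerpts quoted in this report, but the surrounding code is illustrative.

```python
# Sketch: tag the port with the dead VLAN and install a drop flow even
# when no tag is present yet (the fail-open case described above).
# The int_br object stands in for neutron's integration-bridge wrapper.
DEAD_VLAN_TAG = 4095

def port_dead(int_br, port_name, ofport):
    cur_tag = int_br.db_get_val("Port", port_name, "tag")
    if not cur_tag:
        # Untagged ports should never reach here; log it, then kill anyway.
        print("warning: port_dead(): port %s has no tag" % port_name)
    if cur_tag != DEAD_VLAN_TAG:  # also covers the untagged case
        int_br.set_db_attribute("Port", port_name, "tag", DEAD_VLAN_TAG)
        int_br.drop_port(in_port=ofport)
```

The key difference from the original `if cur_tag and cur_tag != DEAD_VLAN_TAG` guard is that an untagged (falsy) tag now triggers the dead-VLAN assignment instead of being skipped.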
Issue #4 Putting the port administratively down actually puts the port on a VLAN shared across the compute node
Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an openflow "drop" rule but it has a lower priority (2) than the allow rules.
Recommendation: Increase the port_dead openflow drop rule priority to MAX
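The reason the priority matters is that OpenFlow evaluates the highest-priority matching flow first, so a priority-2 drop is shadowed by any higher-priority allow rule. A toy illustration (flow matching is simplified to a single field; priorities are the only point being made):

```python
# Toy flow table: the highest-priority matching flow wins, as in OpenFlow.
def lookup(flows, pkt_port):
    matching = [f for f in flows if f['in_port'] == pkt_port]
    return max(matching, key=lambda f: f['priority'])['action'] if matching else 'miss'

flows = [{'in_port': 7, 'priority': 2,  'action': 'drop'},   # port_dead drop rule
         {'in_port': 7, 'priority': 10, 'action': 'allow'}]  # generic allow rule
assert lookup(flows, 7) == 'allow'   # the drop rule is shadowed

flows[0]['priority'] = 65535         # recommended fix: maximum priority
assert lookup(flows, 7) == 'drop'    # now the drop rule wins
```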
Timeline
--------
2017-09-14 Discovered eavesdropping issue
2017-09-15 Verified workaround
2017-10-04 Discovered port-down-traffic issue
2017-11-24 Vendor disclosure to OpenStack
Steps to reproduce
------------------
1. Attach an instance to two networks:
admin$ openstack server create --nic net-id=<net-uuid1> --nic net-id=<net-uuid2> --image <image_id> --flavor <flavor_id> instance_temp
2. Attach a floating IP (FIP) to the instance so you can log in to it
3. Verify:
admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425-9d3c-8e2cd383547d
+-----------+-----------------------------------------------------------------------------+
| Field | Value |
+-----------+-----------------------------------------------------------------------------+
| addresses | network1=192.168.99.8, <FIP>; network2=192.168.80.14 |
| name | instance_temp |
+-----------+-----------------------------------------------------------------------------+
4. SSH to the instance via network1 and run tcpdump on the other port (network2)
[root@instance_temp]$ tcpdump -eeenni eth1
5. Get port-id of network2
admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+
| Port State | Port ID | Net ID | IP addresses | MAC Addr |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+
| ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b |
| ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+
6. Force port down on network 2
admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False
7. The port gets tagged with VLAN 4095, the dead VLAN tag, which is normal:
compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1
INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN
8. Verify the port is tagged with VLAN 4095
compute1# ovs-vsctl show | grep -A3 qvoa848520b-08
Port "qvoa848520b-08"
tag: 4095
Interface "qvoa848520b-08"
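The check in step 8 (and its repeat in step 10) can be scripted rather than eyeballed. A small sketch that parses `ovs-vsctl show` output for a given qvo port and reports whether the dead tag is present; the sample blocks are taken from this report:

```python
import re

def port_has_dead_tag(ovs_show_output, port_name):
    """Return True if the named port's block carries tag 4095."""
    # Grab the block from 'Port "<name>"' up to the next Port line
    # (or end of output), then look for the dead tag inside it.
    pattern = r'Port "%s"(.*?)(?=\n\s*Port\b|\Z)' % re.escape(port_name)
    m = re.search(pattern, ovs_show_output, re.S)
    return bool(m) and 'tag: 4095' in m.group(1)

before = '''Port "qvoa848520b-08"
    tag: 4095
    Interface "qvoa848520b-08"'''
after = '''Port "qvoa848520b-08"
    Interface "qvoa848520b-08"'''
assert port_has_dead_tag(before, "qvoa848520b-08")       # compute1, step 8
assert not port_has_dead_tag(after, "qvoa848520b-08")    # compute2, step 10
```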
9. Now live-migrate the instance:
admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d
10. Verify the tag is gone on compute2, and take a deep breath
compute2# ovs-vsctl show | grep -A3 qvoa848520b-08
Port "qvoa848520b-08"
Interface "qvoa848520b-08"
Port...
compute2# echo "Wut!"
11. Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp
[root@instance_temp] tcpdump -eenni eth1
13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28
13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28
13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28
13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28
13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28
13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28
13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28
13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28
13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28
13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20
13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28
13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28
13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28
13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28
13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24
13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20
Workaround
----------
We temporarily fixed this issue by forcing the dead VLAN tag at port creation on the compute nodes:
/usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py:
def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,
instance_id, interface_type=None,
vhost_server_path=None):
+ # ODCN: initialize port as dead
+ # ODCN: TODO: set drop flow
cmd = ['--', '--if-exists', 'del-port', dev, '--',
'add-port', bridge, dev,
+ 'tag=4095',
'--', 'set', 'Interface', dev,
'external-ids:iface-id=%s' % iface_id,
'external-ids:iface-status=active',
'external-ids:attached-mac=%s' % mac,
'external-ids:vm-uuid=%s' % instance_id]
if interface_type:
cmd += ['type=%s' % interface_type]
if vhost_server_path:
cmd += ['options:vhost-server-path=%s' % vhost_server_path]
return cmd
https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995
def port_dead(self, port, log_errors=True):
'''Once a port has no binding, put it on the "dead vlan".
:param port: an ovs_lib.VifPort object.
'''
# Don't kill a port if it's already dead
cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",
log_errors=log_errors)
+ # ODCN GM 20170915
+ if not cur_tag:
+ LOG.error('port_dead(): port %s has no tag', port.port_name)
+ # ODCN AJS 20170915
+ if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG:
- if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:
LOG.info('port_dead(): put port %s on dead vlan', port.port_name)
self.int_br.set_db_attribute("Port", port.port_name, "tag",
constants.DEAD_VLAN_TAG,
log_errors=log_errors)
self.int_br.drop_port(in_port=port.ofport)
plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py
def drop_port(self, in_port):
+ # ODCN AJS 20171004:
- self.install_drop(priority=2, in_port=in_port)
+ self.install_drop(priority=65535, in_port=in_port)
Regards,
ODC Noord.
Gerhard Muntingh
Albert Siersema
Paul Peereboom |
This issue is being treated as a potential security risk under embargo. Please do not make any public mention of embargoed (private) security vulnerabilities before their coordinated publication by the OpenStack Vulnerability Management Team in the form of an official OpenStack Security Advisory. This includes discussion of the bug or associated fixes in public forums such as mailing lists, code review systems and bug trackers. Please also avoid private disclosure to other individuals not already approved for access to this information, and provide this same reminder to those who are made aware of the issue prior to publication. All discussion should remain confined to this private bug report, and any proposed fixes should be added to the bug as attachments.
|
2017-11-24 13:19:33 |
Tristan Cacqueray |
bug |
|
|
added subscriber Neutron Core Security reviewers |
2017-11-28 05:37:33 |
Tristan Cacqueray |
bug task added |
|
nova |
|
2017-11-28 05:37:46 |
Tristan Cacqueray |
bug |
|
|
added subscriber Nova Core security contacts |
2017-11-29 23:02:05 |
Armando Migliaccio |
bug |
|
|
added subscriber Ihar Hrachyshka |
2017-12-02 06:02:56 |
Armando Migliaccio |
neutron: status |
New |
Triaged |
|
2017-12-02 06:03:15 |
Armando Migliaccio |
nova: status |
New |
Confirmed |
|
2017-12-02 06:03:21 |
Armando Migliaccio |
neutron: importance |
Undecided |
Low |
|
2017-12-05 22:53:51 |
Ihar Hrachyshka |
bug task added |
|
os-vif |
|
2017-12-07 18:51:48 |
Ihar Hrachyshka |
bug |
|
|
added subscriber Miguel Angel Ajo |
2017-12-11 09:39:14 |
Miguel Angel Ajo |
bug |
|
|
added subscriber Numan Siddique |
2017-12-11 11:18:42 |
Paul Peereboom |
bug |
|
|
added subscriber loonatic |
2018-01-05 20:32:39 |
Ihar Hrachyshka |
attachment added |
|
0001-Mark-all-dead-ports-with-dead-tag.patch https://bugs.launchpad.net/neutron/+bug/1734320/+attachment/5031634/+files/0001-Mark-all-dead-ports-with-dead-tag.patch |
|
2018-01-09 21:42:32 |
Ihar Hrachyshka |
attachment added |
|
os-vif tagging new ovs port with dead tag https://bugs.launchpad.net/neutron/+bug/1734320/+attachment/5033809/+files/0001-Create-new-ovs-ports-with-dead-vlan-tag.patch |
|
2018-01-25 17:32:35 |
Ihar Hrachyshka |
neutron: importance |
Low |
High |
|
2018-02-01 14:28:13 |
Matt Riedemann |
bug |
|
|
added subscriber sean mooney |
2018-02-01 14:28:19 |
Matt Riedemann |
bug |
|
|
added subscriber Jay Pipes |
2018-02-01 14:28:23 |
Matt Riedemann |
bug |
|
|
added subscriber Dan Smith |
2018-04-27 15:08:42 |
Miguel Angel Ajo |
bug |
|
|
added subscriber Jakub Libosvar |
2018-04-27 15:08:50 |
Miguel Angel Ajo |
bug |
|
|
added subscriber Daniel Alvarez |
2018-04-27 16:41:02 |
Jeremy Stanley |
description |
def port_dead(self, port, log_errors=True):
'''Once a port has no binding, put it on the "dead vlan".
:param port: an ovs_lib.VifPort object.
'''
# Don't kill a port if it's already dead
cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",
log_errors=log_errors)
+ # ODCN GM 20170915
+ if not cur_tag:
+ LOG.error('port_dead(): port %s has no tag', port.port_name)
+ # ODCN AJS 20170915
+ if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG:
- if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:
LOG.info('port_dead(): put port %s on dead vlan', port.port_name)
self.int_br.set_db_attribute("Port", port.port_name, "tag",
constants.DEAD_VLAN_TAG,
log_errors=log_errors)
self.int_br.drop_port(in_port=port.ofport)
plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py
def drop_port(self, in_port):
+ # ODCN AJS 20171004:
- self.install_drop(priority=2, in_port=in_port)
+ self.install_drop(priority=65535, in_port=in_port)
Regards,
ODC Noord.
Gerhard Muntingh
Albert Siersema
Paul Peereboom |
Eavesdropping private traffic
=============================
Abstract
--------
We've discovered a security issue that allows end users within their own private network to receive traffic from, and send traffic to, other private networks on the same compute node.
Description
-----------
During live migration there is a small time window in which the ports of instances are untagged. An instance then has a port trunked to the integration bridge and receives 802.1Q-tagged private traffic from other tenants.
If the port is administratively down during the live migration, it will remain in trunk mode indefinitely.
Traffic is also possible between ports that are administratively down, even between tenants' self-service networks.
Conditions
----------
The following conditions are necessary:
* Open vSwitch self-service networks
* An OpenStack administrator or an automated process schedules a live migration
We tested this on Newton.
Issues
------
This outcome is the result of multiple independent issues. We list the most important first, followed by bugs that create a fragile situation.
Issue #1 Initially creating a trunk port
When the port is initially created, it is in trunk mode. This creates a fail-open situation.
See: https://github.com/openstack/os-vif/blob/newton-eol/vif_plug_ovs/linux_net.py#L52
Recommendation: create ports in the port_dead state; don't leave them dangling in trunk mode. Add a drop flow initially.
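To illustrate the recommendation, here is a minimal sketch (the function below is a simplified, hypothetical stand-in for the os-vif helper linked above, not the real code): the add-port invocation includes the dead VLAN tag from the start, so the port fails closed instead of open. The Workaround section below applies the same idea to the actual helper.

```python
# Sketch only: build an "ovs-vsctl" argument list that creates the VIF
# port already tagged with the dead VLAN (4095), so it is never an
# untagged trunk port between creation and the agent assigning its tag.
DEAD_VLAN_TAG = 4095

def create_vif_cmd_dead(bridge, dev, iface_id, mac, instance_id):
    return ['--', '--if-exists', 'del-port', dev, '--',
            'add-port', bridge, dev,
            'tag=%d' % DEAD_VLAN_TAG,   # fail-closed: start on the dead VLAN
            '--', 'set', 'Interface', dev,
            'external-ids:iface-id=%s' % iface_id,
            'external-ids:attached-mac=%s' % mac,
            'external-ids:vm-uuid=%s' % instance_id]

cmd = create_vif_cmd_dead('br-int', 'qvo-test', 'port-uuid',
                          'fa:16:3e:00:00:01', 'vm-uuid')
print('tag=4095' in cmd)  # True
```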
Issue #2 Order of creation.
The instance is actually migrated before the (networking) configuration is completed.
Recommendation: do not finish the live migration until the underlying network configuration has been applied completely.
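The recommended ordering can be sketched as follows (all function names here are hypothetical; in real deployments Nova coordinates this via Neutron's "network-vif-plugged" notifications): the migration must not complete before the destination port's VLAN tag and flows are in place.

```python
# Toy ordering model for issue #2: record each phase and refuse to
# complete the migration before the network configuration was applied.
events = []

def plug_vif_on_destination():
    events.append('plug')          # port created (ideally fail-closed)

def apply_network_config():
    events.append('configure')     # agent sets the real tag and flows

def complete_migration():
    # guard: finishing before configuration is the bug described above
    assert 'configure' in events, 'network config not applied yet'
    events.append('migrate')

plug_vif_on_destination()
apply_network_config()
complete_migration()
print(events)  # ['plug', 'configure', 'migrate']
```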
Issue #3 Not closing the port when it is down.
Neutron calls the port_dead function to ensure the port is down. It sets the tag to 4095 and adds a "drop" flow if (and only if) there is already another tag on the port. The port_dead function will keep untagged ports untagged.
https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995
Recommendation: make port_dead also shut the port if no tag is found, and log a warning when this happens.
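A minimal sketch of why the check misses untagged ports (the two conditions are copied from the stable/newton code linked above and from our fix below; the helper names are ours):

```python
# The flawed guard: an untagged port (cur_tag empty/None) never enters
# the branch that applies the dead tag and drop flow, so it stays in
# trunk mode. The fixed guard also catches untagged ports.
DEAD_VLAN_TAG = 4095

def should_kill_port_old(cur_tag):
    # stable/newton behaviour
    return bool(cur_tag) and cur_tag != DEAD_VLAN_TAG

def should_kill_port_fixed(cur_tag):
    # recommended behaviour
    return not cur_tag or cur_tag != DEAD_VLAN_TAG

print(should_kill_port_old(None))    # False: untagged port left open
print(should_kill_port_fixed(None))  # True: untagged port gets the dead tag
```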
Issue #4 Putting the port administratively down actually puts the port on a compute-node-shared VLAN
Instances from different projects on different private networks can talk to each other if they put their ports down. The code does install an OpenFlow "drop" rule, but it has a lower priority (2) than the allow rules.
Recommendation: increase the port_dead OpenFlow drop rule priority to the maximum (65535).
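A toy model of the OpenFlow lookup shows why the priority matters: the switch always applies the highest-priority matching flow, so a drop at priority 2 is shadowed by any higher-priority allow rule (the allow-rule priority below is illustrative, not Neutron's actual value).

```python
# Toy flow-table lookup: return the action of the highest-priority flow
# matching the ingress port ('*' matches any port).
def lookup(flows, in_port):
    matching = [f for f in flows if f['in_port'] in (in_port, '*')]
    return max(matching, key=lambda f: f['priority'])['action']

flows = [
    {'priority': 2, 'in_port': 5,   'action': 'drop'},    # port_dead() drop
    {'priority': 3, 'in_port': '*', 'action': 'normal'},  # illustrative allow rule
]
print(lookup(flows, 5))  # 'normal' -> traffic still flows despite the drop

flows[0]['priority'] = 65535  # the workaround's max-priority drop
print(lookup(flows, 5))  # 'drop'
```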
Timeline
--------
2017-09-14 Discovered eavesdropping issue
2017-09-15 Verified workaround
2017-10-04 Discovered port-down-traffic issue
2017-11-24 Vendor disclosure to OpenStack
Steps to reproduce
------------------
1. Attach an instance to two networks:
admin$ openstack server create --nic net-id=<net-uuid1> --nic net-id=<net-uuid2> --image <image_id> --flavor <flavor_id> instance_temp
2. Attach a FIP to the instance to be able to log in to this instance
3. Verify:
admin$ openstack server show -c name -c addresses fe28a2ee-098f-4425-9d3c-8e2cd383547d
+-----------+-----------------------------------------------------------------------------+
| Field | Value |
+-----------+-----------------------------------------------------------------------------+
| addresses | network1=192.168.99.8, <FIP>; network2=192.168.80.14 |
| name | instance_temp |
+-----------+-----------------------------------------------------------------------------+
4. Ssh to the instance using network1 and run a tcpdump on the other port network2
[root@instance_temp]$ tcpdump -eeenni eth1
5. Get port-id of network2
admin$ nova interface-list fe28a2ee-098f-4425-9d3c-8e2cd383547d
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+
| Port State | Port ID | Net ID | IP addresses | MAC Addr |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+
| ACTIVE | a848520b-0814-4030-bb48-49e4b5cf8160 | d69028f7-9558-4f14-8ce6-29cb8f1c19cd | 192.168.80.14 | fa:16:3e:2d:8b:7b |
| ACTIVE | fad148ca-cf7a-4839-aac3-a2cd8d1d2260 | d22c22ae-0a42-4e3b-8144-f28534c3439a | 192.168.99.8 | fa:16:3e:60:2c:fa |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+
6. Force port down on network 2
admin$ neutron port-update a848520b-0814-4030-bb48-49e4b5cf8160 --admin-state-up False
7. The port gets tagged with VLAN 4095, the dead VLAN tag, which is expected:
compute1# grep a848520b-0814-4030-bb48-49e4b5cf8160 /var/log/neutron/neutron-openvswitch-agent.log | tail -1
INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-e008feb3-8a35-4c97-adac-b48ff88165b2 - - - - -] VIF port: a848520b-0814-4030-bb48-49e4b5cf8160 admin state up disabled, putting on the dead VLAN
8. Verify the port is tagged with vlan 4095
compute1# ovs-vsctl show | grep -A3 qvoa848520b-08
Port "qvoa848520b-08"
tag: 4095
Interface "qvoa848520b-08"
9. Now live-migrate the instance:
admin# nova live-migration fe28a2ee-098f-4425-9d3c-8e2cd383547d
10. Verify the tag is gone on compute2, and take a deep breath
compute2# ovs-vsctl show | grep -A3 qvoa848520b-08
Port "qvoa848520b-08"
Interface "qvoa848520b-08"
Port...
compute2# echo "Wut!"
11. Now traffic of all other self-service networks present on compute2 can be sniffed from instance_temp
[root@instance_temp] tcpdump -eenni eth1
13:14:31.748266 fa:16:3e:6a:17:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.152, length 28
13:14:31.804573 fa:16:3e:e8:a2:d2 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.70, length 28
13:14:31.810482 fa:16:3e:95:ca:3a > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.154, length 28
13:14:31.977820 fa:16:3e:6f:f4:9b > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.150, length 28
13:14:31.979590 fa:16:3e:0f:3d:cc > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 9, p 0, ethertype ARP, Request who-has 10.103.9.163 tell 10.103.9.1, length 28
13:14:32.048082 fa:16:3e:65:64:38 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.101, length 28
13:14:32.127400 fa:16:3e:30:cb:b5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.165, length 28
13:14:32.141982 fa:16:3e:96:cd:b0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.100, length 28
13:14:32.205327 fa:16:3e:a2:0b:76 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.153, length 28
13:14:32.444142 fa:16:3e:1f:db:ed > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 72, p 0, ethertype IPv4, 192.168.99.212 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 103, authtype none, intvl 1s, length 20
13:14:32.449497 fa:16:3e:1c:24:c0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.20, length 28
13:14:32.476015 fa:16:3e:f2:3b:97 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 33, p 0, ethertype ARP, Request who-has 10.0.1.9 tell 10.0.1.22, length 28
13:14:32.575034 fa:16:3e:44:fe:35 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.163, length 28
13:14:32.676185 fa:16:3e:1e:92:d7 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 10, p 0, ethertype ARP, Request who-has 10.103.12.160 tell 10.103.12.150, length 28
13:14:32.711755 fa:16:3e:99:6c:c8 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 62: vlan 10, p 0, ethertype IPv4, 10.103.12.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 2, prio 49, authtype simple, intvl 1s, length 24
13:14:32.711773 fa:16:3e:f5:23:d5 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 58: vlan 12, p 0, ethertype IPv4, 10.103.15.154 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 49, authtype simple, intvl 1s, length 20
Workaround
----------
We temporarily fixed this issue by forcing the dead VLAN tag on port creation on the compute nodes:
/usr/lib/python2.7/site-packages/vif_plug_ovs/linux_net.py:
def _create_ovs_vif_cmd(bridge, dev, iface_id, mac,
                        instance_id, interface_type=None,
                        vhost_server_path=None):
+    # ODCN: initialize port as dead
+    # ODCN: TODO: set drop flow
    cmd = ['--', '--if-exists', 'del-port', dev, '--',
           'add-port', bridge, dev,
+          'tag=4095',
           '--', 'set', 'Interface', dev,
           'external-ids:iface-id=%s' % iface_id,
           'external-ids:iface-status=active',
           'external-ids:attached-mac=%s' % mac,
           'external-ids:vm-uuid=%s' % instance_id]
    if interface_type:
        cmd += ['type=%s' % interface_type]
    if vhost_server_path:
        cmd += ['options:vhost-server-path=%s' % vhost_server_path]
    return cmd
https://github.com/openstack/neutron/blob/stable/newton/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L995
def port_dead(self, port, log_errors=True):
    '''Once a port has no binding, put it on the "dead vlan".

    :param port: an ovs_lib.VifPort object.
    '''
    # Don't kill a port if it's already dead
    cur_tag = self.int_br.db_get_val("Port", port.port_name, "tag",
                                     log_errors=log_errors)
+    # ODCN GM 20170915
+    if not cur_tag:
+        LOG.error('port_dead(): port %s has no tag', port.port_name)
+    # ODCN AJS 20170915
+    if not cur_tag or cur_tag != constants.DEAD_VLAN_TAG:
-    if cur_tag and cur_tag != constants.DEAD_VLAN_TAG:
        LOG.info('port_dead(): put port %s on dead vlan', port.port_name)
        self.int_br.set_db_attribute("Port", port.port_name, "tag",
                                     constants.DEAD_VLAN_TAG,
                                     log_errors=log_errors)
        self.int_br.drop_port(in_port=port.ofport)
plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge.py
def drop_port(self, in_port):
+    # ODCN AJS 20171004:
-    self.install_drop(priority=2, in_port=in_port)
+    self.install_drop(priority=65535, in_port=in_port)
Regards,
ODC Noord.
Gerhard Muntingh
Albert Siersema
Paul Peereboom |
2018-04-27 16:41:11 | Jeremy Stanley | ossa: status | Incomplete | Won't Fix |
2018-04-27 16:42:17 | Jeremy Stanley | information type | Private Security | Public |
2018-04-27 16:42:47 | Jeremy Stanley | tags | | security |
2018-04-29 23:38:50 | Miguel Lavalle | bug watch added | | https://bugzilla.redhat.com/show_bug.cgi?id=1558336 |
2018-05-10 13:27:02 | Brian Haley | bug | | | added subscriber Brian Haley |
2018-08-21 10:16:40 | OpenStack Infra | os-vif: status | New | In Progress |
2018-08-21 10:16:40 | OpenStack Infra | os-vif: assignee | | Slawek Kaplonski (slaweq) |
2018-09-13 16:01:52 | OpenStack Infra | os-vif: assignee | Slawek Kaplonski (slaweq) | sean mooney (sean-k-mooney) |
2018-09-13 20:03:57 | OpenStack Infra | nova: status | Confirmed | In Progress |
2018-09-13 20:03:57 | OpenStack Infra | nova: assignee | | sean mooney (sean-k-mooney) |
2018-10-13 09:32:20 | Marcus Murwall | bug | | | added subscriber Marcus Murwall |
2018-10-19 16:30:21 | sean mooney | cve linked | | 2018-14636 |
2018-11-08 16:36:33 | OpenStack Infra | neutron: status | Triaged | In Progress |
2018-11-08 16:36:33 | OpenStack Infra | neutron: assignee | | sean mooney (sean-k-mooney) |
2018-12-10 14:42:39 | sean mooney | os-vif: importance | Undecided | High |
2018-12-10 14:45:42 | sean mooney | os-vif: status | In Progress | Fix Committed |
2019-01-04 13:55:02 | Bernard Cafarelli | tags | security | neutron-proactive-backport-potential security |
2019-01-04 15:12:59 | sean mooney | os-vif: status | Fix Committed | Fix Released |
2019-01-04 15:13:40 | sean mooney | neutron: status | In Progress | Fix Committed |
2019-01-04 15:14:14 | sean mooney | nova: status | In Progress | Won't Fix |
2019-01-16 15:39:27 | OpenStack Infra | nova: status | Won't Fix | In Progress |
2019-02-17 22:30:47 | Mauro Oddi | bug | | | added subscriber Mauro Oddi |
2019-10-04 15:39:52 | OpenStack Infra | tags | neutron-proactive-backport-potential security | in-stable-rocky neutron-proactive-backport-potential security |
2019-10-18 14:33:53 | OpenStack Infra | tags | in-stable-rocky neutron-proactive-backport-potential security | in-stable-queens in-stable-rocky neutron-proactive-backport-potential security |
2019-10-23 01:39:07 | OpenStack Infra | tags | in-stable-queens in-stable-rocky neutron-proactive-backport-potential security | in-stable-pike in-stable-queens in-stable-rocky neutron-proactive-backport-potential security |
2020-01-14 11:34:01 | Bernard Cafarelli | tags | in-stable-pike in-stable-queens in-stable-rocky neutron-proactive-backport-potential security | in-stable-pike in-stable-queens in-stable-rocky security |
2020-08-03 22:44:29 | ABDULLAH | neutron: status | Fix Committed | Confirmed |
2020-08-03 22:44:39 | ABDULLAH | neutron: status | Confirmed | New |
2020-08-03 22:44:45 | ABDULLAH | nova: status | In Progress | Fix Released |
2020-08-03 22:45:03 | ABDULLAH | neutron: status | New | Incomplete |
2020-08-03 22:45:24 | ABDULLAH | neutron: status | Incomplete | Confirmed |
2020-08-04 12:39:47 | OpenStack Infra | neutron: status | Confirmed | In Progress |
2020-08-04 12:39:47 | OpenStack Infra | neutron: assignee | sean mooney (sean-k-mooney) | Rodolfo Alonso (rodolfo-alonso-hernandez) |
2020-09-04 12:52:17 | Bernard Cafarelli | tags | in-stable-pike in-stable-queens in-stable-rocky security | in-stable-pike in-stable-queens in-stable-rocky neutron-proactive-backport-potential security |
2020-10-23 01:22:40 | OpenStack Infra | tags | in-stable-pike in-stable-queens in-stable-rocky neutron-proactive-backport-potential security | in-stable-pike in-stable-queens in-stable-rocky in-stable-victoria neutron-proactive-backport-potential security |
2020-12-11 11:15:06 | Slawek Kaplonski | neutron: status | In Progress | Fix Released |
2023-04-23 21:00:18 | OpenStack Infra | tags | in-stable-pike in-stable-queens in-stable-rocky in-stable-victoria neutron-proactive-backport-potential security | in-stable-pike in-stable-queens in-stable-rocky in-stable-victoria in-stable-wallaby neutron-proactive-backport-potential security |