Routed ICMPv6 traffic goes through with no security group rules with DVR

Bug #1515444 reported by Ritesh Anand
Affects                      Status     Importance  Assigned to  Milestone
OpenStack Security Advisory  Won't Fix  Undecided   Unassigned
neutron                      Invalid    Undecided   Unassigned

Bug Description

IPv6 traffic flows between two dual-stacked instances connected via a DVR despite the absence of any security group rule allowing it. IPv4 traffic is blocked as expected.

Build: master as of Nov. 11, 2015.
Setup: one controller/network node, two compute nodes.

Steps:
1. Create net1 and net2.
2. Create an IPv4 and an IPv6 subnet (slaac/slaac*) on each network.
3. Create a DVR.
4. Add a router interface to the DVR for each of the four subnets.
5. Boot one instance with a NIC on net1 and another with a NIC on net2.
6. Delete all security group rules, if any exist (see the sketch after the note below).
7. ping6 the other instance's IPv6 address from the first.

Expected: Traffic does not go through.
Observed: Traffic goes through.

*also observed with dhcpv6-stateful addressing.
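
For step 6, a minimal sketch of clearing out the rules, assuming the python-neutronclient CLI used elsewhere in this report (the rule ID is a placeholder):

$ neutron security-group-rule-list
$ neutron security-group-rule-delete <RULE_ID>   # repeat for each rule ID listed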

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

Changed in ossa:
status: New → Incomplete
description: updated
Revision history for this message
Sean M. Collins (scollins) wrote :

Is ICMPv6 the only protocol that is allowed through? What types of ICMPv6 packets are allowed through? This will impact the severity.
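
One hedged way to answer this: on the compute node hosting the destination VM, capture on the instance's tap device (the interface name below is illustrative) and note which ICMPv6 types arrive.

# Echo request/reply are ICMPv6 types 128/129; neighbour discovery
# (types 133-137) is normally permitted by Neutron regardless of tenant
# rules, so only non-ND types would indicate a security group bypass.
$ sudo tcpdump -n -i tapXXXXXXXX-XX icmp6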

tags: added: l3-dvr-backlog
Revision history for this message
Brian Haley (brian-haley) wrote :

Ritesh,

I will try to reproduce this, but can you answer some more questions?

1. You're in a DVR setup, so what does "create DVR" mean? Do you just mean router-create? Maybe it's just best if you supply the exact commands since I'd like to know which IPv6 address you are pinging - link-local vs global.

2. Do both of these VMs wind up on the same compute node or different ones?

3. What does the conntrack table on the compute node look like when this happens? 'sudo conntrack -f ipv6 -L -n'

I might have more follow-on questions later, thanks.
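
If the full table from question 3 is noisy, one way to narrow it to ICMPv6 entries (a sketch, assuming conntrack-tools' default text output, which prints the protocol name at the start of each line):

$ sudo conntrack -f ipv6 -L | grep icmpv6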

Revision history for this message
Ritesh Anand (ritesh-anand) wrote :

Brian,

1. You're in a DVR setup, so what does "create DVR" mean? Do you just mean router-create? Maybe it's just best if you supply the exact commands since I'd like to know which IPv6 address you are pinging - link-local vs global.
>> yes, just router-create.
neutron net-create net1
neutron net-create net2
neutron subnet-create net2 2:2::2/64 --name sub2 --enable-dhcp --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
neutron subnet-create net1 1:1::1/64 --name sub1 --enable-dhcp --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
neutron subnet-create net2 2.2.2.0/24 --name v4sub2 --enable-dhcp
neutron subnet-create net1 1.1.1.0/24 --name v4sub1 --enable-dhcp
neutron router-create dvr
neutron router-interface-add dvr sub1
neutron router-interface-add dvr sub2
neutron router-interface-add dvr v4sub2
neutron router-interface-add dvr v4sub1
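
Note that none of these commands passes the distributed flag explicitly; if the deployment default were in doubt, it could be forced at creation time:

$ neutron router-create dvr --distributed True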

>> Using link local address
$
$ ping6 2:2::f816:3eff:fe19:9182
PING 2:2::f816:3eff:fe19:9182 (2:2::f816:3eff:fe19:9182): 56 data bytes
64 bytes from 2:2::f816:3eff:fe19:9182: seq=0 ttl=63 time=2.759 ms
64 bytes from 2:2::f816:3eff:fe19:9182: seq=1 ttl=63 time=1.874 ms
64 bytes from 2:2::f816:3eff:fe19:9182: seq=2 ttl=63 time=2.014 ms
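
Note that 2:2::f816:3eff:fe19:9182 is an address on the 2:2::/64 subnet, i.e. global scope rather than link-local (fe80::/64). Inside the instance the two scopes can be told apart with ip, assuming an interface named eth0:

$ ip -6 addr show dev eth0   # subnet addresses print "scope global", fe80:: ones "scope link"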

2. Do both of these VMs wind-up on the same compute node or different?
>> They end up on different compute nodes.

3. I could not capture the conntrack table output while in the error state.

Sean,
I had only tried ping6.

I restacked a few hours back and do not see the issue anymore.

Revision history for this message
Sean M. Collins (scollins) wrote : Re: [Bug 1515444] Re: Routed ICMPv6 traffic goes through with no security group rules with DVR

What is your l3_agent.ini? You did not set the router's distributed flag to true in your CLI command.
--
Sean M. Collins

Revision history for this message
Ritesh Anand (ritesh-anand) wrote :

The distributed flag is True by default when devstack is run with Q_DVR_MODE=dvr.
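
For reference, a minimal devstack local.conf excerpt for such a setup (a sketch; with any non-legacy Q_DVR_MODE, devstack writes router_distributed = True into neutron.conf, so a plain router-create yields a distributed router):

[[local|localrc]]
Q_DVR_MODE=dvr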

Here is my l3_agent.ini:

[DEFAULT]
agent_mode = dvr_snat
l3_agent_manager = neutron.agent.l3_agent.L3NATAgentWithStateReport
external_network_bridge = br-ex
interface_driver = openvswitch
ovs_use_veth = False
debug = True
verbose = True

[AGENT]
root_helper_daemon = sudo /usr/local/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Despite several tries, I have not been able to recreate the reported scenario.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Are we sure we should blame something in the DVR implementation?
I think we first need to ascertain that the same does not happen in non-DVR setups, and in any case we should probably look at the L2 agent rather than the L3 agent.
It does seem that the baseline security rules (i.e., block all) are not being applied in the IPv6 case.

Ritesh, are you only able to ping the instance, or are services like SSH and HTTP reachable as well from the other network?
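
A quick hedged check for this (the address is illustrative; assumes OpenBSD netcat inside the source instance): attempt a TCP connect to the SSH port over IPv6 from an instance on the other network.

$ nc -6 -z -v 2:2::f816:3eff:fe19:9182 22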

Revision history for this message
Hardik Italia (hardik-italia) wrote :

I am not able to reproduce the issue as reported with a DVR router.

Removed all IPv6-related rules.

$ neutron security-group-rule-list
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| id                                   | security_group | direction | ethertype | protocol/port | remote          |
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| 32ede39a-141b-407f-8a8d-8f3b2f5ca66f | default        | ingress   | IPv4      | any           | default (group) |
| 84b998b9-ee94-4e11-a54b-112c22b2f225 | default        | egress    | IPv4      | any           | any             |
| e89a75fe-859e-4765-a407-368484f4ce42 | default        | egress    | IPv4      | any           | any             |
| f8ad0c74-6435-481b-ac5e-a74d06b49f51 | default        | ingress   | IPv4      | any           | default (group) |
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+

DVR router with an interface in each subnet:

$ neutron router-show r1
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | True                                 |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | 0646b0bf-bc04-4544-9a82-60b6a65dc891 |
| name                  | r1                                   |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | aec5dca07ef143b296d9d1e43f2ec404     |
+-----------------------+--------------------------------------+

$ neutron router-port-list r1
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 033fc1a4-8a73-457e-a73c-cadaa9213f02 |      | fa:16:3e:d7:b0:8e | {"subnet_id": "269a6a13-a40b-4dfe-a6ca-878a1fb53cef", "ip_address": "11.11.11.1"}    |
| 2f51a071-49f8-4ffc-a637-9de1b4f4f1c8 |      | fa:16:3e:bb:1c:5e | {"subnet_id": "c94e7a3a-878f-42cc-86ca-193aeaa70a03", "ip_address": "12.12.12.1"}    |
| af326465-e266-4db3-8d60-cc0f02b11dcb |      | fa:16:3e:19:16:47 | {"subnet_id": "a4ef5635-2cc5-4dfa-91ed-2837e6907169", "ip_address": "22:22:22::1"}   |
| ca8c04ce-b40d-4db3-88b8-92f27630e967 |      | fa:16:3e:24:2f:3c | {"subnet_id": "5ba08165-b111-4819-994b-8f502bec8a87", "ip_address": "11:11:11::1"}   |
+--------------------------------------+------+-------------------+------------------------------------------------------------...


Changed in neutron:
status: New → Invalid
Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Ritesh, can you try the procedure outlined in comment #8 to see if this still reproduces?

Otherwise, we should probably remove the private security setting and open this bug.

Revision history for this message
Ritesh Anand (ritesh-anand) wrote :

Tristan, I could not reproduce this; please proceed accordingly. Thanks!

information type: Private Security → Public
Changed in ossa:
status: Incomplete → Won't Fix
description: updated