Traffic from OnPrem nodes behind GW to Cloud fails

Bug #1784957 reported by Senthilnathan Murugappan on 2018-08-01
This bug affects 1 person
Affects: Juniper Openstack (status tracked in Trunk)
Status: Fix Committed, assigned to Adam Kulagowski
Status: Fix Committed, assigned to Adam Kulagowski

Bug Description

The default policy for the FORWARD chain in iptables is set to DROP, so packets from the private interface of the GW to the tap0 interface get dropped. We need to add a rule to forward packets between tap0 and the private interface. Rules 5 and 6 in the output below need to be added as part of provisioning.

Chain FORWARD (policy DROP 12 packets, 688 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT all -- * docker0 ctstate RELATED,ESTABLISHED
2 0 0 DOCKER all -- * docker0
3 0 0 ACCEPT all -- docker0 !docker0
4 0 0 ACCEPT all -- docker0 docker0

5 17 1440 ACCEPT all -- tap0 bond0
6 14 1164 ACCEPT all -- bond0 tap0
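Assuming the GW's cloud-facing interface is tap0 and the customer-facing interface is bond0 (as in the output above), the two ACCEPT rules can be placed at positions 5 and 6 with `iptables -I`; a sketch (interface names and positions taken from this testbed, adjust for other setups):

```shell
# Insert ACCEPT rules at positions 5 and 6 of the filter table's FORWARD chain
# so forwarded traffic between tap0 and bond0 is not hit by the DROP policy.
iptables -t filter -I FORWARD 5 -i tap0 -o bond0 -j ACCEPT
iptables -t filter -I FORWARD 6 -i bond0 -o tap0 -j ACCEPT
```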

Sanju Abraham (asanju) wrote :

The iptables rules that were applied on the GW to get this working were:

iptables -t filter -A FORWARD -i bond0 -o tap0 -j ACCEPT
iptables -t filter -A FORWARD -i tap0 -o bond0 -j ACCEPT

The pvt customer facing subnet was on the bond0 and the cloud facing subnet on tap0.
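The resulting chain (with the rule numbers and packet counters shown in the bug description) can be inspected with:

```shell
# List the FORWARD chain with rule numbers, counters, and numeric addresses
iptables -t filter -L FORWARD -v -n --line-numbers
```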

Changed in juniperopenstack:
importance: Undecided → High
status: New → In Progress

By design we do not touch the firewall on the onPrem host; this decision was made 2 months ago. Currently the Ansible playbook looks like this:

 - name: Deploy iptables
   hosts: gateways
   become: yes
   roles:
     - role: iptables
       when: provider != "onprem"

This change was made so as not to break existing firewall configurations.

Sanju Abraham (asanju) wrote :

As a fix for this bug, we will stop the execution of the play with a detailed message, suggest a possible solution, and expect the operator/user to fix the issue before retrying.

tags: added: beta-blocker
Sanju Abraham (asanju) wrote :

Moving the importance of this issue to Medium, as it is seen only on one testbed, where the iptables policy denies all forwarded traffic except what the rules in the FORWARD chain explicitly accept.

For OnPrem, multicloud provisioning does not set any rules, since this is entirely customer equipment and the customer is responsible for it. As a preventive fix, we will detect the deny policy, stop the play, and recommend that the iptables configuration be fixed.

We do not want to change iptables on onPrem CPE, as I have personally seen customers dislike that approach. Hence we will follow the standard Ansible practice: detect the issue, abort, and suggest the corrective action.
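A minimal sketch of such a check in Ansible (the task names, the registered variable, and the string-match detection are assumptions for illustration, not the actual playbook):

```yaml
# Detect a DROP policy on the FORWARD chain and abort with a corrective hint.
- name: Read the FORWARD chain policy
  command: iptables -t filter -L FORWARD -n
  register: forward_chain
  changed_when: false

- name: Abort when the FORWARD policy is DROP
  fail:
    msg: >-
      The iptables FORWARD chain policy is DROP, which will drop traffic
      between the GW private interface and tap0. Please add ACCEPT rules
      forwarding between tap0 and the private interface (e.g.
      'iptables -t filter -A FORWARD -i tap0 -o bond0 -j ACCEPT' plus the
      reverse rule), then re-run the play.
  when: "'policy DROP' in forward_chain.stdout"
```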

Jeba Paulaiyan (jebap) wrote :

As the commit is not updated here, moving back to New.
