Multicloud :: Azure OnPrem :: Azure GW HA same subnet

Bug #1799245 reported by Ritam Gangopadhyay
Affects: Juniper Openstack (status tracked in Trunk)
  Series | Status       | Importance | Assigned to
  R5.0   | Fix Released | High       | Sanju Abraham
  Trunk  | Fix Released | High       | Sanju Abraham

Bug Description

When we bring up gateway HA on the Azure cloud with both gateways in the same subnet, as in the topology below:

    - name: WestUS2
      resource_group: contrail-test-west-us-2
      vnet:
        - name: rg-vpc-1
          cidr_block: 172.16.1.0/24
          subnets:
            - name: rg-subnet-1
              cidr_block: 172.16.1.0/25
              security_group: rg-sg-1
          security_groups:
            - name: rg-sg-1
              rules:
                - name: rg-all_in
                  direction: inbound
                - name: rg-all_out
                  direction: outbound
          instances:
            - name: rg-gw-1
              provision: true
              username: ubuntu
              os: ubuntu16
              os_version: 16.04.201705080
              instance_type: Standard_F2
              subnets: rg-subnet-1
              interface: eth1
              roles:
               - gateway
              protocols_mode:
                - ssl_server
                - ipsec_server
                - ipsec_client
            - name: rg-gw-2
              provision: true
              username: ubuntu
              os: ubuntu16
              os_version: 16.04.201705080
              instance_type: Standard_F2
              subnets: rg-subnet-1
              interface: eth1
              roles:
               - gateway
              protocols_mode:
                - ssl_server
                - ipsec_server
                - ipsec_client

#############################

The OnPrem controller cannot reach the Azure computes. Once we bring down the vrouter on the Azure GWs, connectivity comes back up.

Inventory file snippet:

gateways:
  hosts:
    172.16.1.4:
      ansible_host: 13.66.254.206
      ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
      ansible_ssh_pass: null
      ansible_ssh_pipelining: true
      ansible_user: ubuntu
      bgp_rr_peers:
      - 100.65.0.1
      cloud: rg-vpc-1
      deploy: true
      ipsec_remote_clients:
      - 172.16.1.68
      ipsec_remote_servers: []
      local_ip: 172.16.1.4
      local_lan: 172.16.1.0/26
      local_lans:
      - 172.16.1.0/26
      protocols_mode:
      - ssl_server
      - ipsec_server
      - ipsec_client
      provider: azure
      public_ip: 13.66.254.206
      services: []
      ssl_remote_clients:
      - 10.87.74.132
      ssl_remote_servers: []
      vpn_ip: 100.64.0.2
      vpn_lo_ip: 100.65.0.2
      vrrp_peer_ip:
      - 172.16.1.68
      wan_ip: 172.16.1.69
    172.16.1.7:
      ansible_host: 13.66.250.70
      ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
      ansible_ssh_pass: null
      ansible_ssh_pipelining: true
      ansible_user: ubuntu
      bgp_rr_peers:
      - 100.65.0.1
      cloud: rg-vpc-1
      deploy: true
      ipsec_remote_clients: []
      ipsec_remote_servers:
      - 172.16.1.69
      local_ip: 172.16.1.7
      local_lan: 172.16.1.0/26
      local_lans:
      - 172.16.1.0/26
      protocols_mode:
      - ssl_server
      - ipsec_server
      - ipsec_client
      provider: azure
      public_ip: 13.66.250.70
      services: []
      ssl_remote_clients:
      - 10.87.74.132
      ssl_remote_servers: []
      vpn_ip: 100.64.0.3
      vpn_lo_ip: 100.65.0.3
      vrrp_peer_ip:
      - 172.16.1.69
      wan_ip: 172.16.1.68
    192.168.2.1:
      ansible_host: 10.87.74.132
      ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
      ansible_ssh_pass: c0ntrail123
      ansible_ssh_pipelining: true
      ansible_user: root
      bgp_rr_peers:
      - 100.65.0.2
      - 100.65.0.3
      cloud: null
      deploy: true
      ipsec_remote_clients: []
      ipsec_remote_servers: []
      local_ip: 192.168.2.1
      local_lan: 192.168.2.0/24
      local_lans:
      - 192.168.1.0/24
      - 192.168.2.0/24
      protocols_mode:
      - ssl_client
      provider: onprem
      public_ip: 10.87.74.132
      services:
      - BGP_RR
      ssl_remote_clients: []
      ssl_remote_servers:
      - 13.66.250.70
      - 13.66.254.206
      vpn_ip: 100.64.0.1
      vpn_lo_ip: 100.65.0.1
      vrrp_peer_ip: []
      wan_ip: 10.87.74.132

Revision history for this message
Sanju Abraham (asanju) wrote :

Pull Request - https://github.com/Juniper/contrail-multi-cloud/pull/458 addresses the issue.

The issue occurred because the code did not consider the network to which each compute belongs. As a result, the vrouter gateway list on a gateway included nodes that are in the same network as the gateway itself.

In AWS and Azure, instances spawned in the same network share the same vrouter gateway, so it is important not to add those nodes to the interface route table for the vhost VMI on the gateway.
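
Below is a minimal sketch of that filtering step (the function name and inputs are made up for illustration; this is not the actual contrail-multi-cloud code). It uses the gateway's local LAN to drop same-subnet computes from the remote route list:

import ipaddress

def remote_computes_for_gateway(gw_local_lan, compute_ips):
    """Return /32 routes only for computes that are NOT on the gateway's local LAN."""
    lan = ipaddress.ip_network(gw_local_lan)
    return [f"{ip}/32" for ip in compute_ips
            if ipaddress.ip_address(ip) not in lan]

# With the addresses from this bug, an Azure GW on 172.16.1.0/26 keeps only
# the OnPrem compute and drops the same-subnet Azure computes:
computes = ["192.168.1.2", "172.16.1.5", "172.16.1.6"]
print(remote_computes_for_gateway("172.16.1.0/26", computes))   # ['192.168.1.2/32']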

The generated vars.yml, which gives the details of the gateways and the computes connected to each GW, now looks like this:

controller: 192.168.1.1
controller_gws:
- 192.168.2.1
controller_lans:
- 192.168.1.0/24
controllers:
- 192.168.1.1
gateways_remote_info:
  172.16.1.4:
    remote_compute_node:
    - 192.168.1.2/32
    remote_gateway:
    - 172.16.1.7/32
    - 192.168.2.1/32
  172.16.1.7:
    remote_compute_node:
    - 192.168.1.2/32
    remote_gateway:
    - 172.16.1.4/32
    - 192.168.2.1/32
  192.168.2.1:
    remote_compute_node:
    - 172.16.1.6/32
    - 172.16.1.5/32
    remote_gateway:
    - 172.16.1.7/32
    - 172.16.1.4/32
k8s_master: 192.168.1.1
remote_gws:
- 172.16.1.7
- 172.16.1.4
- 192.168.2.1

In this generated YAML, the compute nodes that sit in the same subnet as the gateways are no longer listed against those gateways.
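
As an illustration only (assuming the vhost VMI interface route table on each gateway ends up with one /32 entry per address in its remote_compute_node and remote_gateway lists; the script below is not part of the tooling), a quick sanity check against the generated vars.yml could look like:

import yaml  # PyYAML

with open("vars.yml") as f:       # path assumed; point it at the generated file
    data = yaml.safe_load(f)

for gw, info in data["gateways_remote_info"].items():
    routes = info.get("remote_compute_node", []) + info.get("remote_gateway", [])
    print(gw, "->", sorted(routes))

# For 172.16.1.4 this prints only 172.16.1.7/32, 192.168.1.2/32 and
# 192.168.2.1/32 -- no routes for the same-subnet computes 172.16.1.5/.6.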

Revision history for this message
Ritam Gangopadhyay (ritam) wrote :

Fix verified in 5.0.2-0.309.
