Multicloud :: Azure OnPrem :: Azure GW HA same subnet
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Juniper Openstack | Status tracked in Trunk | | |
R5.0 | Fix Released | High | Sanju Abraham |
Trunk | Fix Released | High | Sanju Abraham |
Bug Description
When we bring up GW HA on the Azure cloud with both GWs in the same subnet, as below:
```yaml
- name: WestUS2
  resource_
  vnet:
  - name: rg-vpc-1
    subnets:
    - name: rg-subnet-1
  - name: rg-sg-1
  - name: rg-all_in
  - name: rg-all_out
  - name: rg-gw-1
    os: ubuntu16
    - gateway
    - ssl_server
    - ipsec_server
    - ipsec_client
  - name: rg-gw-2
    os: ubuntu16
    - gateway
    - ssl_server
    - ipsec_server
    - ipsec_client
```
#######
Connectivity from the OnPrem controller to the Azure computes is lost. Once we bring down the vRouter on the Azure GWs, connectivity comes back up.
Inventory file snippet:
```yaml
gateways:
  hosts:
    172.16.1.4:
      ansible_host: 13.66.254.206
      ansible_
      ansible_
      ansible_
      ansible_user: ubuntu
      bgp_rr_peers:
      - 100.65.0.1
      cloud: rg-vpc-1
      deploy: true
      ipsec_
      - 172.16.1.68
      ipsec_
      local_ip: 172.16.1.4
      local_lan: 172.16.1.0/26
      local_lans:
      - 172.16.1.0/26
      protocols:
      - ssl_server
      - ipsec_server
      - ipsec_client
      provider: azure
      public_ip: 13.66.254.206
      services: []
      ssl_
      - 10.87.74.132
      ssl_
      vpn_ip: 100.64.0.2
      vpn_lo_ip: 100.65.0.2
      vrrp_peer_ip:
      - 172.16.1.68
      wan_ip: 172.16.1.69
    172.16.1.7:
      ansible_host: 13.66.250.70
      ansible_
      ansible_
      ansible_
      ansible_user: ubuntu
      bgp_rr_peers:
      - 100.65.0.1
      cloud: rg-vpc-1
      deploy: true
      ipsec_
      ipsec_
      - 172.16.1.69
      local_ip: 172.16.1.7
      local_lan: 172.16.1.0/26
      local_lans:
      - 172.16.1.0/26
      protocols:
      - ssl_server
      - ipsec_server
      - ipsec_client
      provider: azure
      public_ip: 13.66.250.70
      services: []
      ssl_
      - 10.87.74.132
      ssl_
      vpn_ip: 100.64.0.3
      vpn_lo_ip: 100.65.0.3
      vrrp_peer_ip:
      - 172.16.1.69
      wan_ip: 172.16.1.68
    192.168.2.1:
      ansible_host: 10.87.74.132
      ansible_
      ansible_
      ansible_
      ansible_user: root
      bgp_rr_peers:
      - 100.65.0.2
      - 100.65.0.3
      cloud: null
      deploy: true
      ipsec_
      ipsec_
      local_ip: 192.168.2.1
      local_lan: 192.168.2.0/24
      local_lans:
      - 192.168.1.0/24
      - 192.168.2.0/24
      protocols:
      - ssl_client
      provider: onprem
      public_ip: 10.87.74.132
      services:
      - BGP_RR
      ssl_
      ssl_
      - 13.66.250.70
      - 13.66.254.206
      vpn_ip: 100.64.0.1
      vpn_lo_ip: 100.65.0.1
      vrrp_peer_ip: []
      wan_ip: 10.87.74.132
```
Pull Request - https://github.com/Juniper/contrail-multi-cloud/pull/458 addresses the issue.
The issue was caused by not considering the network to which a compute node belongs. As a result,
the vRouter gateway list on a gateway included nodes that are in the same network as the gateway itself.
In AWS and Azure, the vRouter gateway for all instances spawned in a given network is the same,
so it is important not to add those instances to the interface route table for the vhost VMI on the
gateway.
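The fix described above amounts to a subnet-membership filter. Below is a minimal sketch of that idea, not the actual code from the pull request; the function name and variables are hypothetical, and it uses only the standard `ipaddress` module:

```python
import ipaddress

def filter_remote_computes(gateway_lan, compute_ips):
    """Keep only compute IPs that are NOT in the gateway's own LAN.

    Computes in the same subnet already use this gateway as their vRouter
    gateway, so adding them to the interface route table for the vhost VMI
    would break their connectivity (the symptom seen in this bug).
    """
    lan = ipaddress.ip_network(gateway_lan)
    return [ip for ip in compute_ips
            if ipaddress.ip_address(ip) not in lan]

# Gateway 172.16.1.4 has local_lan 172.16.1.0/26 (see inventory above):
# Azure computes 172.16.1.5/.6 fall inside it and are skipped, while the
# OnPrem compute 192.168.1.2 is kept as a remote route.
print(filter_remote_computes("172.16.1.0/26",
                             ["172.16.1.5", "172.16.1.6", "192.168.1.2"]))
# → ['192.168.1.2']
```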
The generated vars.yml, which gives the details of the gateways and the computes connected to each GW, now looks like:
```yaml
controller: 192.168.1.1
remote_info:
  compute_node:
  compute_node:
  compute_node:
controller_gws:
- 192.168.2.1
controller_lans:
- 192.168.1.0/24
controllers:
- 192.168.1.1
gateways_
  172.16.1.4:
    remote_
    - 192.168.1.2/32
    remote_gateway:
    - 172.16.1.7/32
    - 192.168.2.1/32
  172.16.1.7:
    remote_
    - 192.168.1.2/32
    remote_gateway:
    - 172.16.1.4/32
    - 192.168.2.1/32
  192.168.2.1:
    remote_
    - 172.16.1.6/32
    - 172.16.1.5/32
    remote_gateway:
    - 172.16.1.7/32
    - 172.16.1.4/32
k8s_master: 192.168.1.1
remote_gws:
- 172.16.1.7
- 172.16.1.4
- 192.168.2.1
```
In this generated YAML, the Azure gateways no longer list their local compute nodes, since those computes are in the same subnet as the gateways.
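The per-gateway `remote_gateway` lists above follow a simple pattern: each gateway gets a /32 host route to every other gateway. A small sketch of that pattern (the helper name is hypothetical, not the actual generator code):

```python
def remote_gateway_routes(gateways, self_ip):
    """Build /32 host routes to every gateway except the node itself,
    matching the shape of the remote_gateway lists in vars.yml."""
    return [f"{gw}/32" for gw in gateways if gw != self_ip]

gateways = ["172.16.1.4", "172.16.1.7", "192.168.2.1"]
# For gateway 172.16.1.4 this reproduces its remote_gateway entries.
print(remote_gateway_routes(gateways, "172.16.1.4"))
# → ['172.16.1.7/32', '192.168.2.1/32']
```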