k8s: Compute node reboots when ping initiated from pod to/from MX when ip_fabric_forwarding is enabled
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Juniper Openstack | Status tracked in Trunk | | | |
| R5.0 | Fix Released | High | Saurabh | |
| Trunk | Fix Released | High | Saurabh | |
Bug Description
orchestrator: k8s
build: ocata-master-117
host OS: centos7.5
Topology
========
master: 192.168.10.2
pod: 192.168.10.20
fabric network: 192.168.10.32/29
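As a quick cross-check of the addresses reported above, Python's standard `ipaddress` module can test subnet membership. This is only a sketch built from values quoted in this report (master, pod, and the stated fabric subnet):

```python
import ipaddress

# Addresses as transcribed in the topology above
fabric = ipaddress.ip_network("192.168.10.32/29")  # host range .33 - .38
hosts = {
    "master": ipaddress.ip_address("192.168.10.2"),
    "pod": ipaddress.ip_address("192.168.10.20"),
}

for name, addr in hosts.items():
    print(f"{name} {addr} in {fabric}: {addr in fabric}")
```

As transcribed, neither address falls inside 192.168.10.32/29, which may simply reflect truncation in the report rather than an actual misconfiguration.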
On the destination machine (MX)
===============================
11:08:08.062972 Out arp who-has 192.168.10.20 tell 192.168.10.100
11:08:08.563473 In IP 192.168.10.20 > 192.168.10.100: ICMP echo request, id 181, seq 248, length 64
MAC Address        Address         Name            Interface    Flags
00:25:90:c3:af:ab  192.168.10.1    192.168.10.1    ge-1/2/2.10  none
00:25:90:c3:08:6b  192.168.10.2    192.168.10.2    ge-1/2/2.10  none
00:25:90:c3:3f:13  192.168.10.3    192.168.10.3    ge-1/2/2.10  none
00:25:90:c5:10:92  192.168.10.5    192.168.10.5    ge-1/2/2.10  none
On the compute node (nodei16)
=============================
16:39:41.814551 IP 192.168.10.20 > 192.168.10.100: ICMP echo request, id 181, seq 999, length 64
16:39:42.814554 IP 192.168.10.20 > 192.168.10.100: ICMP echo request, id 181, seq 1000, length 64
[root@nodei16 /]# arp
Address HWtype HWaddress Flags Mask Iface
gateway ether 80:ac:ac:f0:a2:c1 C eno1
192.168.10.254 (incomplete) vhost0
nodec20.local ether 00:25:90:c3:08:6a C eno1
192.168.10.2 ether 00:25:90:c3:08:6b C vhost0
nodei40.
192.168.10.3 ether 00:25:90:c3:3f:13 C vhost0
puppet ether 00:e0:81:ca:5a:75 C eno1
192.168.10.6 ether 00:25:90:e4:08:8e C vhost0
nodei21.
192.168.10.100 ether 4c:96:14:98:23:e1 C vhost0
10.204.217.240 ether 80:71:1f:c0:38:70 C eno1
252132<=>474640 192.168.10.20:181 1 (0)
(Gen: 1, K(nh):35, Action:F, Flags:, QOS:-1, S(nh):35, Stats:3547/347606,
SPort 64936, TTL 0, Sinfo 4.0.0.0)
474640<=>252132 192.168.10.100:181 1 (0)
(Gen: 1, K(nh):35, Action:F, Flags:, QOS:-1, S(nh):15, Stats:0/0, SPort 52383,
TTL 0, Sinfo 0.0.0.0)
[root@nodei16 /]# dropstats
Invalid IF 0
Trap No IF 0
IF TX Discard 0
IF Drop 0
IF RX Discard 0
Flow Unusable 0
Flow No Memory 0
Flow Table Full 0
Flow NAT no rflow 0
Flow Action Drop 0
Flow Action Invalid 0
Flow Invalid Protocol 0
Flow Queue Limit Exceeded 0
New Flow Drops 0
Flow Unusable (Eviction) 0
Original Packet Trapped 0
Discards 64699<<
TTL Exceeded 0
Mcast Clone Fail 0
Cloned Original 16
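In a long `dropstats` listing only the counters that actually moved matter (here, Discards and Cloned Original). A small parser along these lines can filter them out; `nonzero_drops` is a hypothetical helper for illustration, not part of the vRouter toolchain:

```python
import re

def nonzero_drops(text):
    """Parse dropstats-style output and return {counter: value} for nonzero counters."""
    out = {}
    for line in text.splitlines():
        # Counter name, whitespace, integer value, optional trailing "<<" marker
        m = re.match(r"^\s*(.*?)\s+(\d+)\s*(<<)?\s*$", line)
        if m and int(m.group(2)):
            out[m.group(1)] = int(m.group(2))
    return out

sample = """Flow Action Drop          0
Discards                  64699<<
Cloned Original           16"""
print(nonzero_drops(sample))  # {'Discards': 64699, 'Cloned Original': 16}
```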
2018-05-28 Mon 16:54:54:434.828 IST nodei16 [Thread 139701471885056, Pid 22313]: SANDESH: Send FAILED: 1527506694434304 [SYS_NOTICE]: SandeshModuleCl
2018-05-28 Mon 16:54:54:500.201 IST nodei16 [Thread 139701653199168, Pid 22313]: XMPP [SYS_NOTICE]: XmppEventLog: XMPP Peer nodei16:
2018-05-28 Mon 16:54:56:476.384 IST nodei16 [Thread 139701653199168, Pid 22313]: XMPP [SYS_NOTICE]: XmppEventLog: XMPP Peer nodei16:
2018-05-28 Mon 16:59:38:969.607 IST nodei16 [Thread 139701471885056, Pid 22313]: ID-PERM not set for object <default-
=======
yaml
=======
global_
  REGISTRY_
  CONTAINER_
provider_config:
  bms:
    ssh_pwd: c0ntrail123
    ssh_user: root
    ssh_public_key: /root/.
    ssh_
    domainsuffix: local
instances:
  nodec19:
    provider: bms
    ip: 10.204.217.4
    roles:
      config:
      control:
      analytics:
      webui:
      k8s_master:
      kubemanager:
  nodec21:
    provider: bms
    ip: 10.204.217.6
    roles:
      k8s_node:
      vrouter:
contrail_
  CONTAINER_
  CONTRAIL_VERSION: ocata-master-115
  KUBERNETES_
  KUBERNETES_
  CLOUD_
  CONTROLLER_NODES: 10.204.217.4
  CONTROL_NODES: 192.168.10.1
  VROUTER_GATEWAY: 192.168.10.100
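A minimal sanity check on the address fields above (a sketch; `cfg` only mirrors the values quoted from the yaml fragment, and the network split is an assumption based on a typical deployment where CONTROL_NODES and VROUTER_GATEWAY sit on the data/fabric network while CONTROLLER_NODES is on the management network):

```python
import ipaddress

# Values copied from the yaml fragment above
cfg = {
    "CONTROLLER_NODES": "10.204.217.4",
    "CONTROL_NODES": "192.168.10.1",
    "VROUTER_GATEWAY": "192.168.10.100",
}

for key, value in cfg.items():
    addr = ipaddress.ip_address(value)  # raises ValueError on a malformed entry
    print(f"{key}: {addr} (private={addr.is_private})")

# Assumed split: 192.168.10.0/24 is the data/fabric side, 10.204.217.0/24 is mgmt
data_net = ipaddress.ip_network("192.168.10.0/24")
assert ipaddress.ip_address(cfg["CONTROL_NODES"]) in data_net
assert ipaddress.ip_address(cfg["VROUTER_GATEWAY"]) in data_net
```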
tags: added: sanityblocker
summary:
- k8s: Unable to resolve the ARP for the pods in the fabric network with control and data interfaces
+ k8s: Unable to resolve the ARP for the pods when ip fabric forwarding is enabled

summary:
- k8s: Unable to resolve the ARP for the pods when ip fabric forwarding is enabled
+ k8s: Compute node reboots when ping initiated from pod or from mx when ip_fabric_forwarding is enabled

summary:
- k8s: Compute node reboots when ping initiated from pod or from mx when ip_fabric_forwarding is enabled
+ k8s: Compute node reboots when ping initiated from pod to/from mx when ip_fabric_forwarding is enabled
Investigating a little more on my setup; will update.