Comment 2 for bug 1870590

Revision history for this message
Elvinas (elvinas-3) wrote :

As expected, changing the Calico CIDR did not help. Networking works on the worker node where the pod runs, regardless of the CIDR. Networking to the metrics server does not work from the master or from the other worker host. I am not sure whether it should be that way, i.e. whether hosts are simply not supposed to reach pod networks directly.

As the control plane components are not run as containers but directly on the host, and there is no Docker environment, I am not yet sure how to run a debug container attached to the same namespace. Will leave that for next week. :)
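One idea to try next week (untested sketch on my side; assumes the nodes run containerd and have crictl, jq and nsenter available, and that the sandbox is named after the metrics-server pod): instead of a debug container, enter the pod's network namespace directly from the host.

```shell
# Untested sketch: run host tools inside a pod's network namespace on a
# containerd node (no Docker needed). Pod name "metrics-server" is an
# assumption here.
POD_ID=$(sudo crictl pods --name metrics-server -q | head -n1)
# The sandbox status records the path of the pod's network namespace:
NETNS=$(sudo crictl inspectp "$POD_ID" \
  | jq -r '.info.runtimeSpec.linux.namespaces[] | select(.type=="network").path')
sudo nsenter --net="$NETNS" ip addr   # or ping/curl from inside the pod netns
```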

-------------
juju config calico | grep "cidr:" -A8
  cidr:
    default: 192.168.0.0/16
    description: |
      Network CIDR assigned to Calico. This is applied to the default Calico
      pool, and is also communicated to the Kubernetes charms for use in
      kube-proxy configuration.
    source: user
    type: string
    value: 10.100.0.0/16
-----------------
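One more thing worth checking here (my guess): Calico does not necessarily rewrite an IP pool that already existed before the charm config change, so the charm setting and the live pool can disagree, and pods keep addresses from the old pool until rescheduled. calicoctl shows the pool actually in use:

```shell
# Show the IP pool Calico is actually using; it may differ from the charm
# config if the default pool was created before the CIDR change.
calicoctl get ippool -o wide
```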

On the master host I do see the following routes, and I can ping the gateway IP toward the metrics server subnet, but the metrics server itself does not respond. The same thing happens on the other worker node: it cannot reach the pod either.

-------------------
ubuntu@super-cub:~$ ip r l
default via 192.168.101.1 dev eth0 proto static
10.100.69.128/26 via 192.168.101.18 dev eth0 proto bird
>>>>10.100.89.128/26 via 192.168.101.17 dev eth0 proto bird <<<<
192.168.101.0/24 dev eth0 proto kernel scope link src 192.168.101.19

>>>>ubuntu@super-cub:~$ ping 192.168.101.17 <<<<<
PING 192.168.101.17 (192.168.101.17) 56(84) bytes of data.
64 bytes from 192.168.101.17: icmp_seq=1 ttl=64 time=0.598 ms
64 bytes from 192.168.101.17: icmp_seq=2 ttl=64 time=0.333 ms
^C
--- 192.168.101.17 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1027ms
rtt min/avg/max/mdev = 0.333/0.465/0.598/0.134 ms

ubuntu@super-cub:~$ ping 10.100.89.134
PING 10.100.89.134 (10.100.89.134) 56(84) bytes of data.
^C
--- 10.100.89.134 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1011ms
-------------------------------
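As a sanity check that the unreachable pod IP really falls inside the /26 that bird advertises on the master, a quick pure-bash containment test (`in_cidr` is my own throwaway helper, not part of any tool):

```shell
#!/usr/bin/env bash
# in_cidr IP CIDR — succeed if the IPv4 address falls inside the CIDR block.
in_cidr() {
  local ip=$1 net=${2%/*} bits=${2#*/}
  local IFS=.
  set -- $ip;  local a=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  set -- $net; local n=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  local mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( a & mask )) -eq $(( n & mask )) ]
}

in_cidr 10.100.89.134 10.100.89.128/26 && echo inside || echo outside  # → inside
```

So the pod IP is covered by the `10.100.89.128/26 via 192.168.101.17` route; the route itself is not the problem, something on the path is dropping the traffic.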

On the worker host I am not sure about the routes. I do not know what "blackhole" means here (need more RTFM :D), but the metrics server responds properly on the worker host __where the pod runs__.

-----------------------------
ubuntu@fair-fish:~$ ip r l
default via 192.168.101.1 dev eth0 proto static
10.100.69.128/26 via 192.168.101.18 dev eth0 proto bird
10.100.89.128 dev calia1e8dd009f6 scope link
>>>>>>>>>blackhole 10.100.89.128/26 proto bird <<<<<<<<<<<<<<
10.100.89.130 dev calif4dc95e3deb scope link
10.100.89.131 dev caliec7dab409c0 scope link
10.100.89.132 dev cali95554be076a scope link
>>>>>>>>>10.100.89.134 dev cali104c2cee7b4 scope link <<<<<<<<<<< This is the pod
10.100.89.135 dev calif5b837d795e scope link
10.100.89.136 dev calia2d4e67ab0d scope link
192.168.101.0/24 dev eth0 proto kernel scope link src 192.168.101.17

ubuntu@fair-fish:~$ ping 10.100.89.134
PING 10.100.89.134 (10.100.89.134) 56(84) bytes of data.
64 bytes from 10.100.89.134: icmp_seq=1 ttl=64 time=0.083 ms
64 bytes from 10.100.89.134: icmp_seq=2 ttl=64 time=0.064 ms
^C
--- 10.100.89.134 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1019ms
rtt min/avg/max/mdev = 0.064/0.073/0.083/0.012 ms
--------------------------------
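After some RTFM on the blackhole line: as far as I can tell it is normal Calico behaviour, not the fault. Each node owns a /26 block; the per-pod /32 routes are more specific and win by longest-prefix match, so the blackhole only swallows traffic to addresses in the block that no local pod currently uses, instead of bouncing it back out the default route. A quick demo in a throwaway namespace (needs root; my own sketch, not from the Calico docs):

```shell
# Longest-prefix match beats the blackhole: only unassigned addresses
# in the block are dropped. Uses a throwaway network namespace.
sudo ip netns add blackhole-demo
sudo ip -n blackhole-demo link set lo up
sudo ip -n blackhole-demo route add blackhole 10.100.89.128/26
sudo ip -n blackhole-demo route add 10.100.89.134/32 dev lo
sudo ip -n blackhole-demo route get 10.100.89.134   # resolves via lo
sudo ip -n blackhole-demo route get 10.100.89.135   # fails: blackholed
sudo ip netns del blackhole-demo
```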