Dual Stack node-ip parameter is incorrect

Bug #2012007 reported by Jeremy Brisko
Affects                         Status    Importance  Assigned to  Milestone
Kubernetes Common Layer         Triaged   Medium      Unassigned
Kubernetes Control Plane Charm  Triaged   Medium      Unassigned
Kubernetes Worker Charm         Triaged   Medium      Unassigned

Bug Description

When deploying dual-stack Charmed Kubernetes, the kubelet --node-ip argument is set to either an IPv4 address or an IPv6 address, but never both. In dual-stack mode it should instead be a comma-separated pair: (ipv4),(ipv6). The node needs to know about both addresses for pods using host networking; otherwise those pods behave as if they were running on a single-stack network. As far as I can tell, the function get_node_ip (https://github.com/charmed-kubernetes/layer-kubernetes-common/blob/ee2c0dc0bac647514c92148706d0887ba925c4cf/lib/charms/layer/kubernetes_common.py#L968) just needs to check for dual stack and apply both ingress addresses to the parameter.
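A minimal sketch of the selection logic being asked for. This is not the charm's actual get_node_ip implementation; the function name get_node_ips, its ingress_addresses parameter, and the sample addresses are assumptions for illustration. The one fact relied on is that kubelet's --node-ip flag accepts a comma-separated IPv4/IPv6 pair in dual-stack clusters:

```python
import ipaddress


def get_node_ips(ingress_addresses):
    """Pick addresses for kubelet's --node-ip (hypothetical sketch).

    Given the node's ingress addresses, return a comma-separated
    "<ipv4>,<ipv6>" pair when both families are present (dual stack),
    or the single available address otherwise (single stack).
    """
    ipv4 = next((a for a in ingress_addresses
                 if ipaddress.ip_address(a).version == 4), None)
    ipv6 = next((a for a in ingress_addresses
                 if ipaddress.ip_address(a).version == 6), None)
    if ipv4 and ipv6:
        # Dual stack: kubelet accepts one address per family, comma-separated.
        return "{},{}".format(ipv4, ipv6)
    # Single stack: fall back to whichever family exists.
    return ipv4 or ipv6
```

With example addresses, get_node_ips(["10.0.0.4", "fd00::4"]) would yield "10.0.0.4,fd00::4", while a single-stack node would keep its lone address unchanged.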

Revision history for this message
George Kraft (cynerva) wrote :

Thanks for the report. We will need to be careful to make sure we don't break other code that calls get_node_ip. The only other reference I can find is in kubernetes-worker[1], which will need to be updated to properly account for multiple addresses.

[1]: https://github.com/charmed-kubernetes/charm-kubernetes-worker/blob/5f408eefde950022afca62ad077650659b81da63/reactive/kubernetes_worker.py#L597

Changed in layer-kubernetes-common:
importance: Undecided → Medium
Changed in charm-kubernetes-master:
importance: Undecided → Medium
Changed in charm-kubernetes-worker:
importance: Undecided → Medium
Changed in layer-kubernetes-common:
status: New → Triaged
Changed in charm-kubernetes-master:
status: New → Triaged
Changed in charm-kubernetes-worker:
status: New → Triaged