Turn off allocate-node-cidrs in kube-controller-manager
The error message "CIDR allocation failed, there are no remaining
CIDRs left to allocate in the accepted range" was observed with high
occurrence in the log.
By default allocate-node-cidrs is turned on to allocate an address
range to each node. But if cluster-cidr is set to a /64 mask (or /24
for an IPv4 install) while node-cidr-mask-size uses its default of 64
(or 24 for IPv4), there is no range left for the other nodes:
controller-0 receives all available pod addresses.
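The exhaustion follows from the mask arithmetic: the number of
per-node pod subnets is 2^(node-cidr-mask-size - cluster prefix
length). A minimal sketch of that calculation (the prefix lengths
below are the values from this message; the /112 and /122 pair is a
made-up illustration, not a recommended configuration):

```shell
# Per-node pod subnets = 2^(node-cidr-mask-size - cluster prefix length).
# With a /64 cluster-cidr and the default node-cidr-mask-size of 64,
# only one subnet exists; controller-0 takes it and every other node
# fails with CIDRNotAvailable.
echo $(( 1 << (64 - 64) ))      # prints 1

# A wider split, e.g. a /112 cluster CIDR carved into /122 node masks
# (illustrative numbers only), would leave room for many nodes:
echo $(( 1 << (122 - 112) ))    # prints 1024
```

The same arithmetic explains the IPv4 case: a /24 cluster-cidr with
the default /24 node mask also yields 2^0 = 1 subnet.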
But StarlingX uses Calico as the CNI solution, and Calico does not
use the addresses allocated by kube-controller-manager to distribute
addresses among the pods. That is why this configuration went
unnoticed.
This change turns off the option during install to remove the error
message.
Test Plan
[PASS] install an AIO-DX in IPv6 and validate that there are no more
       CIDRNotAvailable events on the k8s nodes (using kubectl
       describe node controller-1)
[PASS] validate that there is no podCIDR address range allocated to
       k8s nodes (with kubectl get nodes \
       -o jsonpath='{range .items[*]}{@.metadata.name}{"\t"}{@.spec.podCIDR}{"\n"}{end}')
[PASS] create user pods and validate that address allocation is
       done for all pods on all nodes, following the ranges determined
       by Calico (with kubectl get blockaffinities)
Closes-Bug: 2044115
Change-Id: I31e900757e9a94d21fb86e533b6ba46ebfe50e39
Signed-off-by: Andre Kantek <email address hidden>
Reviewed: https://review.opendev.org/c/starlingx/ansible-playbooks/+/900874
Committed: https://opendev.org/starlingx/ansible-playbooks/commit/1acbbb1e89f7506c11be788d17545a0251eb8556
Submitter: "Zuul (22348)"
Branch: master
commit 1acbbb1e89f7506c11be788d17545a0251eb8556
Author: akantek <email address hidden>
Date: Fri Nov 10 15:17:50 2023 -0300