commit 1c39bc6c894edbbd9d9ef0f679e8e0b213789c6c
Author: Andre Kantek <email address hidden>
Date: Mon Nov 13 16:03:06 2023 -0300
Turn off allocate-node-cidrs in kube-controller-manager
The error message "CIDR allocation failed, there are no remaining
CIDRs left to allocate in the accepted range" was being generated
with high occurrence in the log.
By default allocate-node-cidrs is turned on, allocating an address
range for each node. But if cluster-cidr is set to a /64 mask (or /24
for an IPv4 install) while node-cidr-mask-size keeps its default of
/64 (or /24 for IPv4), there is exactly one range to hand out:
controller-0 receives all available pod addresses and no range
remains for the other nodes.
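The arithmetic behind the exhaustion can be sketched as follows (the
function name is illustrative, not part of Kubernetes):

```python
def node_subnet_count(cluster_prefix: int, node_mask_size: int) -> int:
    # Number of per-node pod CIDRs kube-controller-manager can carve
    # out of the cluster CIDR: 2^(node mask - cluster prefix).
    return 2 ** (node_mask_size - cluster_prefix)

# The case described in this commit: cluster-cidr /64 with the IPv6
# default node-cidr-mask-size of 64 yields a single per-node range,
# so only the first node gets one.
assert node_subnet_count(64, 64) == 1

# With a wider cluster CIDR (e.g. /16 split into /24 node ranges)
# there would be room for 256 nodes.
assert node_subnet_count(16, 24) == 256
```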
But StarlingX uses Calico as the CNI solution, and Calico does not
use the addresses allocated by kube-controller-manager to distribute
addresses among the pods. That is why this configuration had gone
unnoticed.
This change turns the option off on existing installations so that
the error message stops appearing when applied as a sw patch.
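For reference, the kube-controller-manager flag in question looks
like this (a sketch of the resulting invocation, not the exact puppet
change; other flags elided):

```shell
# Disable per-node pod CIDR allocation in kube-controller-manager,
# since Calico performs pod IPAM itself.
kube-controller-manager --allocate-node-cidrs=false ...
```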
Test Plan
[PASS] install an AIO-DX in IPv6, apply this change as a patch, and
validate that the occurrence of CIDRNotAvailable events on the
k8s nodes stops growing (kubectl describe node controller-1)
[PASS] create user pods and validate that the address allocation is
done to all pods in all nodes, following the ranges determined
by calico (with kubectl get blockaffinities)
Closes-Bug: 2044115
Change-Id: I62b0cb482873703b4b266708fdba279aafd7c5c1
Signed-off-by: Andre Kantek <email address hidden>
Reviewed: https://review.opendev.org/c/starlingx/stx-puppet/+/900875
Committed: https://opendev.org/starlingx/stx-puppet/commit/1c39bc6c894edbbd9d9ef0f679e8e0b213789c6c
Submitter: "Zuul (22348)"
Branch: master