Comment 3 for bug 2012689

DingGGu (dinggggu) wrote :

Hello. I'm using Karpenter for auto-scaling. Our build-server (CI) workers run as Kubernetes Pods: a Pod is created when a build request arrives, and while the Pod is in the Pending state, Karpenter launches a new instance for it.
Dozens of build requests can arrive at once; in that case several instances are launched, and once the jobs finish Karpenter removes them again.
Sometimes an instance fails to join the cluster.

Karpenter does not replace bootstrap.sh; the instance user-data is included below for reference.

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash -xe
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
/etc/eks/bootstrap.sh '#REDACT#' --apiserver-endpoint 'https://#REDACT#.eks.amazonaws.com' --b64-cluster-ca '....' \
--container-runtime containerd \
--kubelet-extra-args '--node-labels=karpenter.sh/capacity-type=on-demand,karpenter.sh/provisioner-name=#REDACT# --register-with-taints=dedicated=#REDACT#:NoSchedule'
--//--
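
For reference, this is a rough sketch of the commands one could run on an affected instance (e.g. via SSM or SSH) to gather evidence when it fails to join; the log paths assume the standard EKS-optimized AMI layout and the user-data above, so adjust as needed:

#!/bin/bash
# Diagnostic sketch for a node that did not join the cluster.

# 1. Check whether bootstrap.sh completed (user-data output is tee'd here).
sudo tail -n 100 /var/log/user-data.log

# 2. Check whether cloud-init ran the user-data at all, and whether it errored.
sudo cloud-init status --long
sudo tail -n 100 /var/log/cloud-init-output.log

# 3. Check kubelet and containerd; a node that cannot join usually logs the
#    reason (TLS, unreachable API server, etc.) in the kubelet journal.
sudo systemctl status kubelet containerd --no-pager
sudo journalctl -u kubelet --no-pager | tail -n 200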