limit the number of auth-webhook worker processes
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Kubernetes Master Charm | Fix Released | High | Kevin W Monroe | 1.20 |
Bug Description
Gunicorn recommends setting (2*cores)+1 as the number of workers for an application:
https:/
It also says that 4-12 workers should be enough to handle 1000s of reqs per second. K8s master currently does not put a ceiling on the number of workers it spawns, so a 64-core system will fire up 129 workers. This is overkill.
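For illustration, here is a minimal sketch of how such a ceiling could be computed. The helper name and the cap of 12 are assumptions (the cap is drawn from gunicorn's own "4-12 workers" guidance); the value the charm actually uses may differ.

```python
import os

def capped_worker_count(max_workers: int = 12) -> int:
    """Gunicorn's usual heuristic, (2 * cores) + 1, capped at a ceiling.

    The ceiling of 12 is an assumption based on gunicorn's guidance that
    4-12 workers can serve thousands of requests per second; the charm may
    pick a different value.
    """
    cores = os.cpu_count() or 1
    return min(2 * cores + 1, max_workers)

# On a 64-core machine this yields 12 instead of 129.
print(capped_worker_count())
```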
Changed in charm-kubernetes-master:
status: New → In Progress
assignee: nobody → Kevin W Monroe (kwmonroe)
milestone: none → 1.20

Changed in charm-kubernetes-master:
importance: Undecided → High
status: In Progress → Fix Committed
tags: removed: review-needed
Peter De Sousa (pjds) wrote (#2):
Subscribing field-high as this is affecting a Kubeflow deployment.
Chris Sanders (chris.sanders) wrote (#3):
Peter, if you are experiencing a definite impact on the deployment, please provide logs or some description of that effect. This is landing in 1.20 either way, but as far as I'm aware this is a matter of preventative adjustment to avoid issues.
Peter De Sousa (pjds) wrote (#4):
Adding some context on how this bug is exhibiting itself in a deployment.
We have machines with 49 threads per core and 2 cores per machine (resulting in 97 gunicorn workers per master). This shows up as quite persistent timeouts on kubectl and on internal calls to etcd, where we see 'context exceeded' errors. When the number of workers was reduced, the timeouts were also reduced significantly.
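As a quick way to confirm how many auth-webhook workers a master has actually spawned, a rough sketch using psutil is shown below. The process-name and command-line matching are assumptions and may need adjusting for a given deployment.

```python
import psutil

# Count gunicorn processes whose command line mentions auth-webhook.
# Note this includes the gunicorn master process as well as its workers;
# the "auth-webhook" substring is an assumption about how the service
# is invoked on the Kubernetes master units.
workers = [
    p for p in psutil.process_iter(["name", "cmdline"])
    if "gunicorn" in (p.info["name"] or "")
    and any("auth-webhook" in arg for arg in (p.info["cmdline"] or []))
]
print(f"auth-webhook gunicorn processes: {len(workers)}")
```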
Changed in charm-kubernetes-master:
status: Fix Committed → Fix Released
PR for review:
https://github.com/charmed-kubernetes/charm-kubernetes-master/pull/130