Charmed Kubernetes should support series upgrade
Bug Description
All charms used in Charmed Kubernetes should support series upgrade.
* kubernetes-master will need to ensure that HA failover works as expected and set an appropriate status during the upgrade.
* kubernetes-worker will need to drain the node, stop the services, and set an appropriate status during the upgrade.
* kubeapi-load-balancer
* easyrsa just needs to set an appropriate status (originally reported at https:/
* etcd will need to stop the service and set an appropriate status during the upgrade (originally reported at https:/
* The runtime and CNI subordinates don't really need to do anything since the node is drained by k8s-worker.
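The worker's steps above (drain the node, stop the services, set a status) can be sketched in plain Python. This is a minimal illustration, not the charm's actual code; the function names, the systemd service name, and the exact kubectl flags are assumptions:

```python
import subprocess


def drain_command(node_name):
    # kubectl drain evicts pods so the machine can be upgraded safely;
    # daemonset pods are skipped and emptyDir data is allowed to be removed.
    return ['kubectl', 'drain', node_name,
            '--ignore-daemonsets', '--delete-local-data']


def workload_status(phase):
    # Status a charm would report on either side of the series upgrade.
    return {
        'pre-series-upgrade': ('blocked', 'Series upgrade in progress'),
        'post-series-upgrade': ('active', 'Ready'),
    }[phase]


def pre_series_upgrade(node_name, run=subprocess.check_call):
    # Drain first, then stop the workload services, then report status.
    run(drain_command(node_name))
    run(['systemctl', 'stop', 'snap.kubelet.daemon'])  # service name assumed
    return workload_status('pre-series-upgrade')
```

After the machine's series has been upgraded, the post-series-upgrade hook would reverse these steps: restart the services, uncordon the node, and report the active status.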
Changed in charm-kubernetes-master:
  status: New → In Progress
  assignee: nobody → Cory Johns (johnsca)
Changed in charm-kubeapi-load-balancer:
  assignee: nobody → Cory Johns (johnsca)
Changed in charm-etcd:
  assignee: nobody → Cory Johns (johnsca)
Changed in charm-easyrsa:
  assignee: nobody → Cory Johns (johnsca)
Changed in charm-kubernetes-master:
  milestone: none → 1.19
Changed in charm-kubeapi-load-balancer:
  milestone: none → 1.19
Changed in charm-etcd:
  milestone: none → 1.19
Changed in charm-easyrsa:
  milestone: none → 1.19
  importance: Undecided → Medium
  importance: Medium → Undecided
  status: New → Triaged
Changed in charm-etcd:
  status: New → Triaged
Changed in charm-kubeapi-load-balancer:
  status: New → Triaged
Changed in charm-etcd:
  status: Triaged → In Progress
Changed in charm-easyrsa:
  status: Triaged → In Progress
Changed in charm-aws-integrator:
  assignee: nobody → Cory Johns (johnsca)
  status: New → Triaged
Changed in charm-aws-iam:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
Changed in charm-aws-integrator:
  milestone: none → 1.19
Changed in charm-calico:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-canal:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-containerd:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-docker:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-flannel:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-gcp-integrator:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-kata:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-keepalived:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
Changed in charm-openstack-integrator:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-tigera-secure-ee:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-vsphere-integrator:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in charm-flannel:
  status: Triaged → In Progress
Changed in charm-calico:
  status: Triaged → In Progress
Changed in charm-canal:
  status: Triaged → In Progress
Changed in charm-containerd:
  status: Triaged → In Progress
Changed in charm-docker:
  status: Triaged → In Progress
Changed in charm-azure-integrator:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
  status: New → Triaged
Changed in layer-docker-registry:
  assignee: nobody → Cory Johns (johnsca)
  milestone: none → 1.19
Changed in charm-aws-integrator:
  status: Triaged → In Progress
Changed in charm-azure-integrator:
  status: Triaged → In Progress
Changed in charm-gcp-integrator:
  status: Triaged → In Progress
Changed in charm-kata:
  status: Triaged → In Progress
Changed in charm-openstack-integrator:
  status: Triaged → In Progress
Changed in charm-vsphere-integrator:
  status: Triaged → In Progress
Changed in charm-aws-iam:
  status: New → In Progress
Changed in layer-docker-registry:
  status: New → In Progress
Changed in charm-keepalived:
  status: New → In Progress
Changed in charm-tigera-secure-ee:
  status: Triaged → In Progress
Changed in charm-kubeapi-load-balancer:
  status: Triaged → In Progress
tags: added: review-needed
Changed in charm-azure-integrator:
  status: In Progress → Fix Committed
Changed in charm-aws-integrator:
  status: In Progress → Fix Committed
Changed in charm-aws-iam:
  status: In Progress → Fix Committed
Changed in layer-docker-registry:
  status: In Progress → Fix Committed
Changed in charm-gcp-integrator:
  status: In Progress → Fix Committed
Changed in charm-kubernetes-worker:
  status: In Progress → Fix Committed
Changed in charm-openstack-integrator:
  status: In Progress → Fix Committed
Changed in charm-calico:
  status: In Progress → Fix Committed
Changed in charm-containerd:
  status: In Progress → Fix Committed
Changed in charm-etcd:
  status: In Progress → Fix Committed
Changed in charm-flannel:
  status: In Progress → Fix Committed
Changed in charm-kata:
  status: In Progress → Fix Committed
Changed in charm-keepalived:
  status: In Progress → Fix Committed
Changed in charm-kubeapi-load-balancer:
  status: In Progress → Fix Committed
Changed in charm-kubernetes-master:
  status: In Progress → Fix Committed
Changed in charm-vsphere-integrator:
  status: In Progress → Fix Committed
Changed in charm-docker:
  status: In Progress → Fix Committed
Changed in charm-easyrsa:
  status: In Progress → Fix Committed
Changed in charm-tigera-secure-ee:
  status: In Progress → Fix Committed
Changed in charm-canal:
  status: In Progress → Fix Committed
tags: removed: review-needed
Changed in charm-aws-iam:
  status: Fix Committed → Fix Released
Changed in charm-aws-integrator:
  status: Fix Committed → Fix Released
Changed in charm-azure-integrator:
  status: Fix Committed → Fix Released
Changed in charm-calico:
  status: Fix Committed → Fix Released
Changed in charm-canal:
  status: Fix Committed → Fix Released
Changed in charm-containerd:
  status: Fix Committed → Fix Released
Changed in layer-docker-registry:
  status: Fix Committed → Fix Released
Changed in charm-docker:
  status: Fix Committed → Fix Released
Changed in charm-easyrsa:
  status: Fix Committed → Fix Released
Changed in charm-etcd:
  status: Fix Committed → Fix Released
Changed in charm-flannel:
  status: Fix Committed → Fix Released
Changed in charm-gcp-integrator:
  status: Fix Committed → Fix Released
Changed in charm-kata:
  status: Fix Committed → Fix Released
Changed in charm-keepalived:
  status: Fix Committed → Fix Released
Changed in charm-kubeapi-load-balancer:
  status: Fix Committed → Fix Released
Changed in charm-kubernetes-master:
  status: Fix Committed → Fix Released
Changed in charm-kubernetes-worker:
  status: Fix Committed → Fix Released
Changed in charm-openstack-integrator:
  status: Fix Committed → Fix Released
Changed in charm-tigera-secure-ee:
  status: Fix Committed → Fix Released
Changed in charm-vsphere-integrator:
  status: Fix Committed → Fix Released
Changed in charm-keepalived:
  milestone: 1.19 → 1.20+ck1
  status: Fix Released → Fix Committed
Changed in charm-keepalived:
  status: Fix Committed → Fix Released
While it would be ideal to have the subordinates stop their services during the upgrade, the pre-series-upgrade hook seems to fire on subordinates before the principal. Because of this, stopping the services would interfere with the worker's ability to drain the pods. During testing, leaving them running didn't have any obvious impact on the cluster's function.