I believe I've tracked this down to Juju's ordering of items in the ConfigMap that it mounts into the workload pod as files. As an example, here are snippets from two different calls that Juju makes periodically to keep the Deployment in sync with what's been defined in the pod spec:
Snippet #1:

    "configMap": {
        "name": "pipelines-api-samples-config",
        "items": [
            {
                "key": "parallel_join.yaml",
                "path": "parallel_join.yaml"
            }, {
                "key": "sequential.yaml",
                "path": "sequential.yaml"
            }, {
                "key": "xgboost_training_cm.yaml",
                "path": "xgboost_training_cm.yaml"
            }, {
                "key": "condition.yaml",
                "path": "condition.yaml"
            }, {
                "key": "exit_handler.yaml",
                "path": "exit_handler.yaml"
            }
        ],
        "defaultMode": 420
    }
Snippet #2:

    "configMap": {
        "name": "pipelines-api-samples-config",
        "items": [
            {
                "key": "sequential.yaml",
                "path": "sequential.yaml"
            }, {
                "key": "xgboost_training_cm.yaml",
                "path": "xgboost_training_cm.yaml"
            }, {
                "key": "condition.yaml",
                "path": "condition.yaml"
            }, {
                "key": "exit_handler.yaml",
                "path": "exit_handler.yaml"
            }, {
                "key": "parallel_join.yaml",
                "path": "parallel_join.yaml"
            }
        ],
        "defaultMode": 420
    }
Notice how `parallel_join.yaml` has moved from the first position to the last. Because the item order differs between calls even though the content is identical, Kubernetes sees the pod template as changed and thinks the Deployment needs updating.
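One way to avoid this churn would be to build the items list in a deterministic order. Here's a minimal sketch in Go (Juju's language) of what that could look like, assuming the file names live in a map; `KeyToPath` and `itemsFromFiles` are stand-in names for illustration, not Juju's actual code (the real `KeyToPath` type lives in `k8s.io/api/core/v1`):

```go
package main

import (
	"fmt"
	"sort"
)

// KeyToPath mirrors the shape of one entry in a ConfigMap
// volume's "items" list (stand-in for the k8s.io/api/core/v1 type).
type KeyToPath struct {
	Key  string
	Path string
}

// itemsFromFiles builds the ConfigMap items list deterministically.
// Go randomizes map iteration order, so ranging over the map
// directly would reproduce exactly the bug described above;
// sorting the keys first makes every call produce the same list.
func itemsFromFiles(files map[string]string) []KeyToPath {
	keys := make([]string, 0, len(files))
	for k := range files {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	items := make([]KeyToPath, 0, len(keys))
	for _, k := range keys {
		items = append(items, KeyToPath{Key: k, Path: files[k]})
	}
	return items
}

func main() {
	files := map[string]string{
		"parallel_join.yaml":       "parallel_join.yaml",
		"sequential.yaml":          "sequential.yaml",
		"xgboost_training_cm.yaml": "xgboost_training_cm.yaml",
		"condition.yaml":           "condition.yaml",
		"exit_handler.yaml":        "exit_handler.yaml",
	}
	// Same order on every call, so the generated Deployment spec
	// stays byte-for-byte stable and no spurious update is triggered.
	for _, it := range itemsFromFiles(files) {
		fmt.Println(it.Key)
	}
}
```

With the items sorted, repeated reconciliation passes produce an identical pod template, so Kubernetes no longer perceives a change.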