systemctl restart configure-agent-env.service fails on master and worker node
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Magnum Charm | In Progress | High | Felipe Reyes |
Bug Description
Once Magnum is able to create the cluster, the master and worker nodes sometimes end up in an unhealthy state. Log into any master or worker node and check the status:

systemctl status configure-agent-env.service
systemctl restart configure-agent-env.service
Even when the nodes are in a healthy state, the restart fails. To reproduce:

ssh into any node.
Run systemctl restart configure-agent-env.service; it fails.
Edit the script invoked by configure-agent-env.service and change mkdir /etc/kubernetes/ to mkdir -p /etc/kubernetes/; the service can then be restarted successfully.
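The failure mode can be reproduced outside the node: plain mkdir exits non-zero when the target directory already exists, while mkdir -p is idempotent. A minimal sketch, using a scratch directory in place of /etc/kubernetes/ so it is harmless to run anywhere:

```shell
#!/bin/sh
# A temporary directory stands in for /etc/kubernetes/ in this demo.
dir="$(mktemp -d)/etc/kubernetes"

mkdir "$dir" && echo "first mkdir: ok"                    # directory absent: succeeds
mkdir "$dir" 2>/dev/null || echo "second mkdir: failed"   # directory exists: non-zero exit
mkdir -p "$dir" && echo "mkdir -p: ok"                    # -p tolerates an existing directory
```

This is why the service works on first boot but breaks on every subsequent restart: the directory persists, and the unpatched mkdir call then fails.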
This issue causes customers to see the cluster reported as unhealthy, even though kubectl can still reach the containers.
After updating the file fcct-config.yaml it’s necessary to update the “compiled” version in user_data.json with the following command:
podman run --rm -v $(pwd)/fcct-config.yaml:/config.fcc quay.io/coreos/fcct:release --pretty --strict /config.fcc > ./user_data.json
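Since fcct writes the transpiled Ignition config to stdout, a shell redirection error or a strict-mode failure can leave user_data.json empty or truncated. A quick hypothetical sanity check (assuming the regenerated user_data.json is in the current directory) before committing the file:

```shell
#!/bin/sh
# Hypothetical check: fail fast if the regenerated user_data.json is not valid JSON.
python3 -m json.tool user_data.json > /dev/null && echo "user_data.json: valid JSON"
```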
I'm testing the change in a lab environment, will follow up with the results.