I am seeing similar behaviour after re-running the setup-openstack playbooks following a rolling replacement upgrade of the control plane from Xenial to Bionic, on an OSA Rocky deployment.
May 22 10:19:11 infra1.os.mist.rd.bbc.co.uk cinder-volume[7979]: 2019-05-22 10:19:11.768 7979 WARNING cinder.volume.manager [req-43f92ce9-631a-4239-949a-4b84b69c4e01 - - - - -] Update driver status failed: (config name rbd_hdd) is uninitialized.
May 22 10:19:16 infra1.os.mist.rd.bbc.co.uk cinder-volume[7979]: 2019-05-22 10:19:16.750 7979 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_hdd is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:16 infra1.os.mist.rd.bbc.co.uk cinder-volume[7988]: 2019-05-22 10:19:16.912 7988 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_nvme is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:26 infra1.os.mist.rd.bbc.co.uk cinder-volume[7979]: 2019-05-22 10:19:26.751 7979 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_hdd is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:26 infra1.os.mist.rd.bbc.co.uk cinder-volume[7988]: 2019-05-22 10:19:26.912 7988 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_nvme is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:36 infra1.os.mist.rd.bbc.co.uk cinder-volume[7979]: 2019-05-22 10:19:36.751 7979 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_hdd is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:36 infra1.os.mist.rd.bbc.co.uk cinder-volume[7988]: 2019-05-22 10:19:36.913 7988 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_nvme is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:36 infra1.os.mist.rd.bbc.co.uk cinder-volume[7988]: 2019-05-22 10:19:36.917 7988 WARNING cinder.volume.manager [req-c5e58acc-465d-4f87-b7ec-5547b879261e - - - - -] Update driver status failed: (config name rbd_nvme) is uninitialized.
May 22 10:19:46 infra1.os.mist.rd.bbc.co.uk cinder-volume[7979]: 2019-05-22 10:19:46.751 7979 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_hdd is reporting problems, not sending heartbeat. Service will appear "down".
May 22 10:19:46 infra1.os.mist.rd.bbc.co.uk cinder-volume[7988]: 2019-05-22 10:19:46.913 7988 ERROR cinder.service [-] Manager for service cinder-volume infra1.os.mist.rd.bbc.co.uk@rbd_nvme is reporting problems, not sending heartbeat. Service will appear "down".
Restarting the cinder-volume service on the infra host brings the service back up correctly; unfortunately, there does not appear to be anything more useful in the logs to determine the root cause.
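For reference, the workaround I used was along these lines (a sketch only — the unit name and the use of systemd on the infra host are assumptions based on a standard OSA deployment; adjust for your environment, e.g. if cinder-volume runs in a container):

```shell
# On the infra host (or inside the cinder-volume container):
# restart the stuck cinder-volume service.
systemctl restart cinder-volume.service

# Then confirm the rbd_hdd / rbd_nvme backends report "up" again:
openstack volume service list
```

After the restart, both backends show State "up" in the service list and the heartbeat errors stop.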
The playbooks failed for cinder at this step: https://github.com/openstack/openstack-ansible-os_cinder/blob/f9aa4c5a30f54e23665cad9c3396c34d3b8a3287/tasks/cinder_db_setup.yml#L42-L56