This bug affects us, and the proposed workaround of setting osd_class_dir worked.
We installed our system at bionic-ussuri (UCA), upgraded to focal, then continued all the way to focal-yoga (UCA). The Ceph packages from UCA are currently version 17.2.0-0ubuntu0.22.04.1~cloud0.
Applying the osd_class_dir workaround and restarting ceph-mgr.target on the ceph-mons also cleared this error, which appeared after upgrading to focal-yoga and Ceph Quincy:
$ sudo ceph status
cluster:
id: cb562c8a-787a-11ec-a57e-00163e0f587a
health: HEALTH_ERR
Module 'devicehealth' has failed: disk I/O error
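For reference, a minimal sketch of the workaround as applied here. The rados-classes path below is the usual location on Ubuntu amd64 but is an assumption; verify it on your hosts before setting it:

```shell
# Point osd_class_dir at the packaged rados-classes directory
# (path is an assumption; check where your ceph packages install it)
sudo ceph config set osd osd_class_dir /usr/lib/x86_64-linux-gnu/rados-classes

# Restart the mgr daemons on the ceph-mon hosts to clear the failed module
sudo systemctl restart ceph-mgr.target
```

After the mgr restarts, `sudo ceph status` should report HEALTH_OK once the devicehealth module comes back up.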