Hello everybody,

Sorry for the late reply; I just returned to work from a recent event.

In my case I have 5 disks (listed in phys_hw.txt in the logs bundle I previously submitted): 1x 1 TB NVMe for the OS and 4x Kioxia SSDs (1 earmarked for Cinder and 3 for MicroCeph).

nova-scheduler_nova-0.log shows multiple occurrences of:

2023-10-05T17:58:53.673Z [nova-scheduler] Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
2023-10-05T17:58:54.084Z [nova-scheduler] 2023-10-05 17:58:54.082 143 CRITICAL nova [None req-6da8ed32-f895-40d7-bd40-fd6a001429a8 - - - - - -] Unhandled error: sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'nova_api.cell_mappings' doesn't exist")

i.e. the nova_api.cell_mappings table does not exist.

cinder-scheduler_cinder-0.log shows:

2023-10-05T14:24:51.110Z [pebble] Service "cinder-scheduler" starting: cinder-scheduler --use-syslog
2023-10-05T14:24:51.838Z [cinder-scheduler] 2023-10-05 14:24:51.835 15 CRITICAL cinder [None req-a9a9208a-5dd5-458a-9f4f-71d418e60405 - - - - - -] Unhandled error: sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'cinder.services' doesn't exist")
2023-10-05T14:24:51.838Z [cinder-scheduler] [SQL: SELECT services.created_at AS services_created_at, services.deleted_at AS services_deleted_at, services.deleted AS services_deleted, services.id AS services_id, services.uuid AS services_uuid, services.cluster_name AS services_cluster_name, services.host AS services_host, services.`binary` AS services_binary, services.updated_at AS services_updated_at, services.topic AS services_topic, services.report_count AS services_report_count, services.disabled AS services_disabled, services.availability_zone AS services_availability_zone, services.disabled_reason AS services_disabled_reason, services.modified_at AS services_modified_at, services.rpc_current_version AS services_rpc_current_version, services.object_current_version AS services_object_current_version, services.replication_status AS services_replication_status, services.active_backend_id AS services_active_backend_id, services.frozen AS services_frozen

i.e. the cinder.services table does not exist.

cinder-volume_cinder-ceph-0.log shows the same ProgrammingError (Table 'cinder.services' doesn't exist) and the same SELECT statement.

To summarize the bootstrap behaviour: when using Multipass to perform sunbeam bootstrap, it usually succeeds; with a KVM VM, it sometimes fails; on physical hardware, it fails on every attempt (tests performed between 2023-09-07 and 2023-10-05).
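
Both tracebacks point the same way: the nova_api and cinder schemas were never populated, i.e. the initial database migrations did not run (or did not finish) during bootstrap. If it helps anyone reproduce the diagnosis, this is a minimal check I would run against the database; it is only a sketch, and the namespace, pod, and container names (openstack, mysql-0, mysql) plus the way you obtain the root password are assumptions from my install that may differ on yours:

    # Sketch, not a definitive procedure: count the tables nova and cinder
    # expect. On a healthy deployment both schemas hold dozens of tables;
    # in the failing state they come back empty.
    # Assumptions: microk8s kubectl access, database pod mysql-0 in the
    # "openstack" namespace, and $MYSQL_PWD holding the root password.
    microk8s kubectl exec -n openstack mysql-0 -c mysql -- \
      mysql -uroot -p"$MYSQL_PWD" -e \
      "SELECT table_schema, COUNT(*) AS tables
         FROM information_schema.tables
        WHERE table_schema IN ('nova_api', 'cinder')
        GROUP BY table_schema;"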
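
For context, in a conventional (non-charmed) OpenStack install these missing tables would be created by the services' own management commands; in Sunbeam the nova-k8s and cinder-k8s charms are expected to run the equivalent migrations during bootstrap, so the list below is only to illustrate which step appears to be skipped or failing, not a recommended manual fix:

    # Upstream commands that create the tables the logs complain about:
    nova-manage api_db sync      # nova_api.* tables, incl. cell_mappings
    nova-manage cell_v2 map_cell0
    nova-manage db sync          # nova.* tables
    cinder-manage db sync        # cinder.* tables, incl. services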