OpenStack Details:
Base OS - Ubuntu 22.04 LTS
HW - 3 compute nodes (bare metal), 1 Neutron node (bare metal), 3 controllers (VMs)
OpenStack version - stable/2023.2
Problem Statement:
1. Installation - Success
2. VM Creation - Success
3. Reboot VM from inside VM with sudo reboot - Success
4. Reboot from the OpenStack CLI and Horizon (both soft and hard) - FAILURE; the VM moves to the ERROR state.
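For clarity, the failing case above can be sketched as CLI commands (the server name is a placeholder, not the actual instance name):

```shell
# In-guest reboot works fine:
#   ssh into the VM and run: sudo reboot

# Reboot via the OpenStack API puts the VM into ERROR:
openstack server reboot --soft myvm    # soft reboot - fails
openstack server reboot --hard myvm    # hard reboot - fails

# Afterwards the instance shows ERROR:
openstack server show myvm -f value -c status
```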
Logs from nova-compute are provided below. It looks like it is complaining about NVMe multipath. I have multipathd enabled in globals.yml and multipathd disabled at the OS layer (leaving it enabled on the host prevents the multipathd container from starting).
2024-01-17 03:42:53.184 7 INFO nova.compute.resource_tracker [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] Compute node record created for chnjtpopenstackcompute03:chnjtpopenstackcompute03 with uuid: 021f4f77-18dc-4430-b75c-1170ab78f367
2024-01-17 03:42:53.707 7 INFO nova.scheduler.client.report [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [req-ee6f42d0-c13d-4a35-8673-0829abe42b63] Created resource provider record via placement API for resource provider with UUID 021f4f77-18dc-4430-b75c-1170ab78f367 and name chnjtpopenstackcompute03.
2024-01-17 03:42:53.768 7 INFO nova.virt.libvirt.host [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] kernel doesn't support AMD SEV
2024-01-17 03:43:45.214 7 INFO nova.compute.manager [None req-2d869740-7bd4-4862-9c01-af6eddbeed08 - - - - - -] Running instance usage audit for host chnjtpopenstackcompute03 from 2024-01-17 02:00:00 to 2024-01-17 03:00:00. 0 instances.
2024-01-17 04:00:59.162 7 INFO nova.compute.manager [None req-2d869740-7bd4-4862-9c01-af6eddbeed08 - - - - - -] Running instance usage audit for host chnjtpopenstackcompute03 from 2024-01-17 03:00:00 to 2024-01-17 04:00:00. 0 instances.
2024-01-17 04:20:47.457 7 INFO nova.compute.claims [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Claim successful on node chnjtpopenstackcompute03
2024-01-17 04:20:47.846 7 INFO nova.virt.osinfo [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Cannot load Libosinfo: (cannot import name Libosinfo, introspection typelib not found)
2024-01-17 04:20:47.869 7 INFO nova.virt.libvirt.driver [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
2024-01-17 04:20:47.992 7 INFO nova.virt.block_device [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Booting with volume-backed-image ee7ce773-7532-4fd8-acbc-d5068127eb75 at /dev/vda
2024-01-17 04:21:51.818 7 INFO oslo.privsep.daemon [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp9azy49p4/privsep.sock']
2024-01-17 04:21:52.680 7 INFO oslo.privsep.daemon [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Spawned new privsep daemon via rootwrap
2024-01-17 04:21:52.563 945 INFO oslo.privsep.daemon [-] privsep daemon starting
2024-01-17 04:21:52.567 945 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2024-01-17 04:21:52.569 945 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2024-01-17 04:21:52.569 945 INFO oslo.privsep.daemon [-] privsep daemon running as pid 945
2024-01-17 04:21:52.772 7 WARNING os_brick.initiator.connectors.nvmeof [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Could not find nvme_core/parameters/multipath: FileNotFoundError: [Errno 2] No such file or directory: '/sys/module/nvme_core/parameters/multipath'
2024-01-17 04:21:52.783 7 WARNING os_brick.initiator.connectors.nvmeof [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Process execution error in _get_host_uuid: Unexpected error while running command.
Command: blkid overlay -s UUID -o value
Exit code: 2
Stdout: ''
Stderr: '': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2024-01-17 04:21:53.367 7 INFO nova.virt.libvirt.driver [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Creating image(s)
2024-01-17 04:21:53.416 7 WARNING os_brick.initiator.connectors.base [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:21:53.418 7 WARNING os_brick.initiator.connectors.base [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:21:53.419 7 INFO os_brick.initiator.connectors.iscsi [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Trying to connect to iSCSI portal XXX.XXX.XXX.179:3260
2024-01-17 04:21:53.444 7 WARNING os_brick.initiator.connectors.iscsi [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2024-01-17 04:21:54.586 7 INFO oslo.privsep.daemon [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2t9vjwdm/privsep.sock']
2024-01-17 04:21:55.438 7 INFO oslo.privsep.daemon [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Spawned new privsep daemon via rootwrap
2024-01-17 04:21:55.325 973 INFO oslo.privsep.daemon [-] privsep daemon starting
2024-01-17 04:21:55.328 973 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2024-01-17 04:21:55.330 973 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
2024-01-17 04:21:55.330 973 INFO oslo.privsep.daemon [-] privsep daemon running as pid 973
2024-01-17 04:21:55.651 7 INFO os_vif [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:26:b1,bridge_name='br-int',has_traffic_filtering=True,id=4fc29a17-20b5-406e-b199-8eaacbbe2faf,network=Network(1c4591c7-8f11-4b24-bca3-a049b8587181),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fc29a17-20')
2024-01-17 04:21:56.109 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] VM Started (Lifecycle Event)
2024-01-17 04:21:56.170 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] VM Paused (Lifecycle Event)
2024-01-17 04:21:56.275 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] During sync_power_state the instance has a pending task (spawning). Skip.
2024-01-17 04:21:56.401 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] VM Resumed (Lifecycle Event)
2024-01-17 04:21:56.407 7 INFO nova.virt.libvirt.driver [-] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Instance spawned successfully.
2024-01-17 04:21:56.484 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] During sync_power_state the instance has a pending task (spawning). Skip.
2024-01-17 04:21:56.501 7 INFO nova.compute.manager [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Took 3.14 seconds to spawn the instance on the hypervisor.
2024-01-17 04:21:56.625 7 INFO nova.compute.manager [req-a2f7a560-c4af-4bc3-8feb-d87e6063b664 req-4c1ea611-7d5a-44d0-ac5a-aa02a5a92bf6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Took 69.22 seconds to build instance.
2024-01-17 04:21:58.437 7 WARNING nova.compute.manager [req-50469c84-2443-4c58-9773-73b915ea49a3 req-b54ced82-04e5-4e30-902d-5edffeef81c0 61bedd9dbcc74235ba9724006b21e80f 57fb19e276404eb383feb5b63cc610f0 - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Received unexpected event network-vif-plugged-4fc29a17-20b5-406e-b199-8eaacbbe2faf for instance with vm_state active and task_state None.
2024-01-17 04:26:09.414 7 WARNING nova.compute.manager [req-1f066bf5-4c12-496b-9f92-1bc2eddbf438 req-5c25c648-afca-42c9-bbaf-2bf05841eff1 61bedd9dbcc74235ba9724006b21e80f 57fb19e276404eb383feb5b63cc610f0 - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Received unexpected event network-vif-unplugged-4fc29a17-20b5-406e-b199-8eaacbbe2faf for instance with vm_state active and task_state reboot_started_hard.
2024-01-17 04:26:09.606 7 INFO nova.virt.libvirt.driver [-] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Instance destroyed successfully.
2024-01-17 04:26:09.644 7 INFO os_vif [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:26:b1,bridge_name='br-int',has_traffic_filtering=True,id=4fc29a17-20b5-406e-b199-8eaacbbe2faf,network=Network(1c4591c7-8f11-4b24-bca3-a049b8587181),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fc29a17-20')
2024-01-17 04:26:09.647 7 WARNING os_brick.initiator.connectors.base [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:26:09.830 7 INFO nova.virt.libvirt.host [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] UEFI support detected
2024-01-17 04:26:09.886 7 WARNING os_brick.initiator.connectors.base [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:26:09.888 7 WARNING os_brick.initiator.connectors.base [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:26:09.889 7 INFO os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Trying to connect to iSCSI portalXXX.XXX.XXX.179:3260
2024-01-17 04:26:09.914 7 WARNING os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2024-01-17 04:26:11.459 7 WARNING nova.compute.manager [req-d64b1fb3-ba39-4051-b899-04ccec0b3ccb req-ae83ebb5-dfde-4b57-8a10-a3328f3d476b 61bedd9dbcc74235ba9724006b21e80f 57fb19e276404eb383feb5b63cc610f0 - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Received unexpected event network-vif-plugged-4fc29a17-20b5-406e-b199-8eaacbbe2faf for instance with vm_state active and task_state reboot_started_hard.
2024-01-17 04:26:24.606 7 INFO nova.compute.manager [-] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] VM Stopped (Lifecycle Event)
2024-01-17 04:28:34.504 7 WARNING os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] LUN 0 on iSCSI portalXXX.XXX.XXX.179:3260 not found on sysfs after logging in.
2024-01-17 04:28:34.577 7 WARNING os_brick.initiator.connectors.base [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:28:34.579 7 INFO os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Trying to connect to iSCSI portalXXX.XXX.XXX.180:3260
2024-01-17 04:28:34.604 7 WARNING os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2024-01-17 04:30:57.410 7 WARNING os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] LUN 0 on iSCSI portalXXX.XXX.XXX.180:3260 not found on sysfs after logging in.
2024-01-17 04:30:58.485 7 WARNING os_brick.initiator.connectors.base [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Service needs to call os_brick.setup() before connecting volumes, if it doesn't it will break on the next release
2024-01-17 04:30:58.487 7 INFO os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Trying to connect to iSCSI portalXXX.XXX.XXX.179:3260
2024-01-17 04:30:58.515 7 WARNING os_brick.initiator.connectors.iscsi [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
#######
# Manually remounted at the NetApp storage box layer, after which the VM reboots.
# Otherwise the service keeps reporting that it is unable to find LUN 0.
#######
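Roughly, the manual intervention looks like the following (an approximation of the recovery sequence, not the exact commands used; the portal IP and target IQN are placeholders):

```shell
# Rediscover and log in to the iSCSI target on the NetApp portal
iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260
iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 --login

# Rescan existing sessions so the LUN shows up in sysfs
iscsiadm -m session --rescan

# Reload the multipath maps so dm-multipath picks the paths back up
multipath -r
```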
2024-01-17 04:32:31.568 7 INFO os_vif [req-bc29d501-69ff-4b4c-928a-bd7409b755b2 req-269bdfc5-176f-403c-aa5c-3ff6fbd722b6 c2f2573508144727a29afce9ee7bec60 17130a9cc6164234a3611451dc91543f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:26:b1,bridge_name='br-int',has_traffic_filtering=True,id=4fc29a17-20b5-406e-b199-8eaacbbe2faf,network=Network(1c4591c7-8f11-4b24-bca3-a049b8587181),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fc29a17-20')
2024-01-17 04:32:31.796 7 WARNING nova.compute.manager [req-58b85f07-1451-4777-905a-58e2c5609cb9 req-b3699090-8d21-4ab2-8f39-47973a92ae4c 61bedd9dbcc74235ba9724006b21e80f 57fb19e276404eb383feb5b63cc610f0 - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Received unexpected event network-vif-plugged-4fc29a17-20b5-406e-b199-8eaacbbe2faf for instance with vm_state active and task_state reboot_started_hard.
2024-01-17 04:32:31.849 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] VM Resumed (Lifecycle Event)
2024-01-17 04:32:31.855 7 INFO nova.virt.libvirt.driver [-] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Instance rebooted successfully.
2024-01-17 04:32:31.917 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
2024-01-17 04:32:31.918 7 INFO nova.compute.manager [None req-921cfff0-ed67-400f-b017-dbc001e34e0f - - - - - -] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] VM Started (Lifecycle Event)
2024-01-17 04:32:33.825 7 WARNING nova.compute.manager [req-650293ea-0780-4049-9e58-fa7ee463085d req-a9d8296f-2cd3-47c6-ad11-85f757519175 61bedd9dbcc74235ba9724006b21e80f 57fb19e276404eb383feb5b63cc610f0 - - default default] [instance: ac309ea5-c8c3-44dd-a138-68c3475a706a] Received unexpected event network-vif-plugged-4fc29a17-20b5-406e-b199-8eaacbbe2faf for instance with vm_state active and task_state None.
If I create a bootable volume first and then use it to spin up the instance, the instance reboots successfully when the command is issued via the OpenStack CLI.
During this process, the multipath mount disappears for a while and then reappears.
---
But when I create the instance from an image, the multipath device goes into a failed state and stays stuck there for a long time:
3600a098038313873772b566c61313136 dm-1 NETAPP,LUN C-Mode
size=31G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 15:0:0:0 sdj 8:144 failed faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 14:0:0:0 sdi 8:128 failed faulty running
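For anyone reproducing this, a few diagnostics run on the compute host may help narrow it down (a sketch; these assume a kolla-ansible deployment with a multipathd container, and must be run as root):

```shell
# The os-brick warning above fires when this sysfs parameter is absent
# (e.g. the nvme modules are not loaded or lack multipath support):
cat /sys/module/nvme_core/parameters/multipath 2>/dev/null \
  || echo "nvme_core multipath parameter not present"

# Confirm multipathd runs only in the kolla container, not on the host:
systemctl is-active multipathd
docker ps --filter name=multipathd --format '{{.Names}} {{.Status}}'

# Current iSCSI sessions and multipath path state:
iscsiadm -m session
multipath -ll
```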