os_brick.exception.VolumeDeviceNotFound
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
os-brick | Fix Committed | Undecided | Unassigned |
Bug Description
Running OpenStack 2023.2 with Cinder and PureStorage Flash Array as the backend. I am leveraging NVMe/TCP as the protocol. I am able to create a blank volume on the FA through Horizon. If I mark that volume as bootable and create an instance with it, or create a volume with an image, the volumes fail.
The most prevalent error I get is: "raise exception.VolumeDeviceNotFound".
When the workflow kicks off, I can see the volume mounted on the host.
nvme list
Node Generic SN Model Namespace Usage Format FW Rev
-------
/dev/nvme0n1 /dev/ng0n1 0EC012D099B841EB Pure Storage FlashArray 0x3f 1.07 GB / 1.07 GB 512 B + 0 B 6.5.1
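For illustration, here is a minimal sketch of how the `nvme list` check above could be done programmatically. It assumes the JSON shape produced by `nvme list -o json` (nvme-cli); the exact keys can vary between nvme-cli versions, and the sample data below is just a stand-in for the output shown above, not captured from the affected host.

```python
import json

# Illustrative sample in the shape of `nvme list -o json` output;
# key names may differ slightly across nvme-cli versions.
SAMPLE = """
{
  "Devices": [
    {
      "DevicePath": "/dev/nvme0n1",
      "ModelNumber": "Pure Storage FlashArray",
      "SerialNumber": "0EC012D099B841EB",
      "NameSpace": 63
    }
  ]
}
"""

def find_devices(listing: str, model_substr: str) -> list[str]:
    """Return device paths whose model contains model_substr,
    mirroring a manual scan of the `nvme list` output."""
    data = json.loads(listing)
    return [d["DevicePath"] for d in data.get("Devices", [])
            if model_substr in d.get("ModelNumber", "")]

print(find_devices(SAMPLE, "FlashArray"))  # ['/dev/nvme0n1']
```

This confirms the namespace is visible at the kernel level, which matches what the bug reporter sees: the device exists, yet attachment still fails later in the workflow.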
ls /dev/disk/
/dev/disk/
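The `VolumeDeviceNotFound` in the title is what os-brick raises when the device path it is waiting for never appears. A rough sketch of that wait-and-raise pattern (not the actual os-brick implementation; the class and timings here are simplified stand-ins):

```python
import os
import time

class VolumeDeviceNotFound(Exception):
    """Stand-in for os_brick.exception.VolumeDeviceNotFound."""

def wait_for_device(path: str, retries: int = 3, interval: float = 0.01) -> str:
    # Poll for the device node; os-brick performs a similar (but more
    # elaborate) retry loop before handing the path to Nova.
    for _ in range(retries):
        if os.path.exists(path):
            return path
        time.sleep(interval)
    raise VolumeDeviceNotFound(f"device not found: {path}")

# An existing path resolves immediately; a path that never appears
# raises, which is the failure mode reported in this bug.
print(wait_for_device("/dev/null"))
```

The point is that the exception in the title is a symptom: the device search gives up, and the interesting question is why the connect step before it failed.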
I tried turning on debug and verbose logging to see the commands being sent, to help understand where the process is failing, but I was unable to see them.
Changed in os-brick: | |
status: | New → Fix Committed |
After looking at the logs I see a number of errors like:
ERROR os_brick.initiator.connectors.nvmeof [None req-09f091dc-0934-4609-8e7e-cfdca70d5fea 268785dd82794839ba2ff15fc962c0a6 a17cd616f3094b89bbb0a201a8a8a9b2 - - - -] Could not connect to Portal tcp at 10.136.194.68:4420 (ctrl: None): exit_code: 1, stdout: "", stderr: "already connected#012",: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
These look to be the root cause: the Pure Cinder driver is doing exactly what it is supposed to do with regard to managing the volume and host connections for the FlashArray, but os-brick treats the "already connected" result from the connect command as a fatal error.
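To illustrate the failure mode, here is a hedged sketch of what tolerant handling of that exit code could look like: a wrapper that runs an `nvme connect`-style command and treats "already connected" on stderr as success rather than raising. This is not os-brick's actual code, and the fake command below only simulates the nvme-cli behaviour seen in the log.

```python
import subprocess

def nvme_connect(cmd: list[str]) -> None:
    """Run an `nvme connect`-style command, treating an
    'already connected' failure as a no-op success."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        if "already connected" in result.stderr:
            return  # controller is already up; nothing to do
        raise RuntimeError(
            f"exit {result.returncode}: {result.stderr.strip()}")

# Simulate nvme-cli exiting 1 with the stderr seen in the logs above.
fake_nvme = ["sh", "-c", "echo 'already connected' >&2; exit 1"]
nvme_connect(fake_nvme)  # does not raise
print("treated 'already connected' as success")
```

With handling like this, an existing controller connection would not abort the attach workflow, which is consistent with the bug's Fix Committed status.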