I also encountered the issue of not being able to detach the volume. At first, I thought I had reproduced this problem, but it turned out not to be the case.
The symptom: launch a VM that boots from a Ceph volume, with the volume set to be deleted when the VM is deleted. When the instance is deleted, the volume is instead left in the "in-use" state.
nova-compute.log
ERROR nova.volume.cinder Error: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41) Code: 406: cinderclient.exceptions.NotAcceptable: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41)
WARNING nova.compute.manager [instance: 04544489-dfd2-4c0c-b8c8-a07acbee2b58] Ignoring unknown cinder exception for volume d0a58ecc-e63a-49e4-8785-fb34d113e0f2: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41): cinderclient.exceptions.NotAcceptable: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41)
apache log
"DELETE /v3/c1524133a19945fc9f59708819277bc9/attachments/aa8bc30a-b22b-409c-8472-351b73fdd1ea HTTP/1.1" 406 5462 "-" "python-cinderclient"
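For context on the HTTP 406: cinderclient raises NotAcceptable when the server rejects the requested API microversion, and Cinder's attachments endpoints only exist at volume microversion 3.27 and above. Below is a minimal sketch of the shape of the attachment-delete call Nova makes; the endpoint, project id, attachment id, and token are illustrative placeholders, not values from a real deployment.

```python
from urllib.request import Request

# Illustrative placeholders, not values from a real deployment.
CINDER_ENDPOINT = "http://cinder.example:8776/v3/0123456789abcdef0123456789abcdef"
ATTACHMENT_ID = "11111111-2222-3333-4444-555555555555"

# Deleting an instance tears down the volume *attachment* record via
# DELETE /v3/{project_id}/attachments/{attachment_id}.  The microversion
# is requested in the OpenStack-API-Version header; a server that will
# not honour it answers 406 Not Acceptable, which nova-compute logs and
# ignores, leaving the volume stuck "in-use".
req = Request(
    f"{CINDER_ENDPOINT}/attachments/{ATTACHMENT_ID}",
    method="DELETE",
    headers={
        "OpenStack-API-Version": "volume 3.27",
        "X-Auth-Token": "<keystone-token>",
    },
)
print(req.get_method(), req.full_url)
```

Nothing is sent here; the Request object only illustrates the shape of the call that is failing with 406 in the logs above.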
Regarding this issue, I conducted more tests.
Test 1:
Deploy an env with focal + victoria.
cinder pkg version in cinder-ceph units: 2:17.4.0-0ubuntu1~cloud4
nova pkg version in nova-compute units: 2:22.4.0-0ubuntu1~cloud4
Result: I can NOT reproduce the issue
Test 2:
Deploy an env with focal + victoria.
2 nova-compute units with different nova pkg versions: one with 2:22.4.0-0ubuntu1~cloud4, the other with 2:22.4.0-0ubuntu1~cloud3
cinder pkg version in cinder-ceph units: 2:17.4.0-0ubuntu1~cloud3
Launch a VM on each nova-compute node.
Result: I can reproduce the issue for both VMs
Test 3:
Deploy an env with focal + victoria.
2 nova-compute units with different nova pkg versions: one with 2:22.4.0-0ubuntu1~cloud4, the other with 2:22.4.0-0ubuntu1~cloud3
cinder pkg version in cinder-ceph units: 2:17.4.0-0ubuntu1~cloud4
Launch a VM on each nova-compute node.
Result: I can NOT reproduce the issue for both VMs
Thus, my conclusion is that the issue is caused by the Cinder version 2:17.4.0-0ubuntu1~cloud3 and has nothing to do with the Nova version.
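The ~cloudN suffixes are easy to misread when checking which build a unit is on. As a toy sanity check on the ordering of the two strings above (this only extracts the trailing digit; it is NOT a general Debian version comparator, since epochs and '~' ordering are ignored):

```python
# Toy comparison: the two cinder builds differ only in the trailing
# '~cloudN' digit, so extracting it is enough to order them.  This is
# NOT a general Debian version comparator.
def cloud_rev(pkg_version: str) -> int:
    """Return the N from a trailing '~cloudN' suffix."""
    return int(pkg_version.rsplit("~cloud", 1)[1])

bad = "2:17.4.0-0ubuntu1~cloud3"   # cinder build that reproduces the bug
good = "2:17.4.0-0ubuntu1~cloud4"  # cinder build that does not

assert cloud_rev(bad) < cloud_rev(good)
print("newer build:", good)
```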