The `LVM-over-iSCSI' Cinder driver allows an OpenStack VM to use Logical Volumes over the iSCSI bus. In one of its implementations it uses the `tgtd' SCSI target daemon, with `iscsiadm' handling the initiator side. `tgtd' then uses a Logical Volume as the backing device that actually stores the data.
By default, Cinder instructs `tgtd' to open the backing device with page caching enabled.
While this can speed up I/O in theory, it also introduces the following issues:
1. The VM over-trusts the data storage: it may assume data has been written while it is still sitting in the page cache on the `tgtd' side. A sudden power-off or other failure can then corrupt or lose that data.
2. Double-caching of the data: the VM already maintains its own filesystem and block caches, so caching again on the host effectively moves data cache control outside of the VM. This is a corollary of the above.
3. As a consequence of the above, a VM under QEMU can hang in an `fdatasync' syscall made on a Cinder-provided device: flushing an enormous page cache takes a long time. See [1] for details.
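The buffered path behind issues 1 and 3 can be sketched in a few lines of Python. This is an illustration only, not Cinder or tgtd code; a temporary file stands in for the LV backing device:

```python
import os
import tempfile

# A temp file stands in for the LV backing device (illustration only).
fd, path = tempfile.mkstemp()
try:
    data = b"payload" * 1024
    written = os.write(fd, data)  # returns as soon as the data is in the
                                  # page cache, not when it is on disk
    os.fdatasync(fd)              # blocks until the dirty pages are flushed;
                                  # with gigabytes of dirty cache this is
                                  # the call that stalls the guest
finally:
    os.close(fd)
    os.unlink(path)
```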
The proposed fix is to have `tgtd' open the backing device with `O_DIRECT' instead. The cons of this approach are the following:
1. `O_DIRECT' mode is not supported by all devices and filesystems, and there is no good API to probe for it. However, since we use LVM Logical Volumes as backing devices, we can be sure that this requirement is met.
2. Performance is still an open question. We have to investigate this fully, but our intermediate results favour `O_DIRECT' over `O_SYNC' and write-through caching.
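Point 1 above is easy to demonstrate: `O_DIRECT' imposes alignment requirements on the buffer, offset and I/O size, and simply fails with `EINVAL' on filesystems that do not support it. A hedged Python sketch (the temp file is a stand-in; on a real LV the open would succeed):

```python
import mmap
import os
import tempfile

BLOCK = 4096  # typical logical block size; O_DIRECT requires the buffer,
              # the file offset and the I/O size to be aligned to it

tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)
try:
    fd = os.open(path, os.O_WRONLY | os.O_DIRECT)
    try:
        # mmap allocations are page-aligned, unlike plain bytes objects,
        # so the kernel accepts them for direct I/O.
        buf = mmap.mmap(-1, BLOCK)
        buf.write(b"x" * BLOCK)
        os.write(fd, buf)  # bypasses the page cache entirely
        outcome = "supported"
    finally:
        os.close(fd)
except OSError:
    # e.g. tmpfs rejects O_DIRECT with EINVAL
    outcome = "unsupported"
finally:
    os.unlink(path)

print("O_DIRECT:", outcome)
```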
[1] https://bugs.launchpad.net/mos/6.1.x/+bug/1375245
This has come up on and off over the years, and both behaviours could be argued as "correct". I think the best option here may be to make this configurable: leave the default as is, but provide the ability to change it. Honestly, I thought we already did this, but looking back at the config file I'm not seeing it. Maybe an enhancement we could add?
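For reference, such a knob would ultimately just toggle the flags tgt uses to open the backing store, which tgt already exposes via `bsoflags' in targets.conf. A sketch (the IQN and LV path below are made up):

```
# /etc/tgt/targets.conf -- sketch only; IQN and LV path are hypothetical
<target iqn.2014-10.org.example:volume-0001>
    backing-store /dev/cinder-volumes/volume-0001
    # default is buffered I/O through the page cache;
    # 'direct' opens the backing device with O_DIRECT,
    # 'sync' with O_SYNC
    bsoflags direct
</target>
```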