Follow-up: it does seem to be the tmpfs mount that activate creates that causes the problem.
I manually started the activate container by running the podman command from unit.run for the activate step, but ran "bash -l" instead of the actual activate command.
Then, inside the container, I prevented the tmpfs mount from doing anything by removing /usr/bin/mount and replacing it with a link to /usr/bin/true, and then ran the original activate command:
# /usr/sbin/ceph-volume lvm activate 2 56b13799-3ef5-4ea5-91d5-474f829f12dc --no-systemd
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2   <<< WHY DOES IT DO THIS?
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6fc7e3e3-2ce6-47ab-aac8-adc5c6633dfb/osd-block-56b13799-3ef5-4ea5-91d5-474f829f12dc --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-6fc7e3e3-2ce6-47ab-aac8-adc5c6633dfb/osd-block-56b13799-3ef5-4ea5-91d5-474f829f12dc /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--6fc7e3e3--2ce6--47ab--aac8--adc5c6633dfb-osd--block--56b13799--3ef5--4ea5--91d5--474f829f12dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
--> ceph-volume lvm activate successful for osd ID: 2
Because the tmpfs mount was now effectively a no-op, this activation created the necessary files in the real OSD directory, and after that a systemctl restart of the OSD service brought it up apparently OK.
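One way to tell which state the directory is in before and after the restart is to check what filesystem is backing it. A diagnostic sketch using util-linux findmnt, with the path taken from the log above:

```shell
# findmnt -T reports the filesystem containing the path; "tmpfs" means the
# on-disk OSD directory is shadowed by the activate-time mount.
OSD_DIR=/var/lib/ceph/osd/ceph-2
if findmnt -T "$OSD_DIR" 2>/dev/null | grep -q tmpfs; then
    echo "shadowed by tmpfs"
else
    echo "backed by the real filesystem"
fi
```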
I also did another fresh install on the same hardware using a normal non-ZFS root, and this problem did not occur, so it does in some way appear to be an interaction with ZFS.