Comment 8 for bug 1881747

Martin Strange (mstrange) wrote:

Follow-up: it does seem to be the tmpfs mount that activate creates that is causing the problem.

I manually started the activate container by running the podman command from unit.run for the activate step, but with "bash -l" in place of the actual activate command.
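
Roughly, that looks like the following. This is only a sketch: the real flags, mounts and image name come from unit.run, and the <fsid> and <ceph-image> shown here are placeholders.

# podman run --rm -it --privileged --net=host \
    -v /var/lib/ceph/<fsid>/osd.2:/var/lib/ceph/osd/ceph-2 \
    -v /dev:/dev \
    <ceph-image> bash -l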

Then I prevented the tmpfs mount from doing anything by removing /usr/bin/mount and replacing it with a link to /usr/bin/true, and then ran the original activate command:
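
Inside the container, that was essentially:

# rm /usr/bin/mount
# ln -s /usr/bin/true /usr/bin/mount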

# /usr/sbin/ceph-volume lvm activate 2 56b13799-3ef5-4ea5-91d5-474f829f12dc --no-systemd

Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2 <<< WHY DOES IT DO THIS?
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6fc7e3e3-2ce6-47ab-aac8-adc5c6633dfb/osd-block-56b13799-3ef5-4ea5-91d5-474f829f12dc --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-6fc7e3e3-2ce6-47ab-aac8-adc5c6633dfb/osd-block-56b13799-3ef5-4ea5-91d5-474f829f12dc /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--6fc7e3e3--2ce6--47ab--aac8--adc5c6633dfb-osd--block--56b13799--3ef5--4ea5--91d5--474f829f12dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
--> ceph-volume lvm activate successful for osd ID: 2

Because the tmpfs mount was now effectively a no-op, this activation created the necessary files in the real OSD directory, and I was then able to restart the OSD service with systemctl; it came up apparently OK.
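
For a cephadm-managed OSD that restart is something like the following (assuming the usual cephadm unit naming; the <fsid> is a placeholder for the cluster fsid):

# systemctl restart ceph-<fsid>@osd.2.service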

I also did another fresh install on the same hardware using a normal non-ZFS root, and the problem did not happen, so it does in some way appear to be an interaction with ZFS.