Comment 26 for bug 1070539

Pádraig Brady (p-draigbrady) wrote : Re: create_lvm_image allocates dirty blocks

tl;dr...

I suggest adding iflag=direct oflag=direct to the first dd, and
changing the second dd to: shred -n0 -z -s$zero_remainder $path

Details...

I still don't think the second dd invocation is correct:
the skip= parameter is a block count, whereas you intend it
to be a byte count. Also you want seek= (which offsets into
the output) rather than skip= (which offsets into the input)
here anyway.

You could replace the second dd invocation with something like:

 sh -c "(dd seek=$zero_remainder_start bs=1 count=0 &&
         dd bs=$zero_remainder count=1) </dev/zero >$path"

Both dd processes share the subshell's stdout, so the first
(which copies no data) just positions the shared file offset
at byte $zero_remainder_start, and the second then writes
$zero_remainder bytes of zeros from that point.
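
As a concrete illustration (made-up sizes; note the 1<> rather
than > so that a regular test file isn't truncated, which isn't
a concern for an LV device node):

 # 64MiB test file standing in for the volume
 truncate -s 64M /tmp/vol.img
 # zero just the final 4KiB: position at byte 67104768
 # (64MiB - 4KiB), then write 4096 zero bytes from there
 sh -c '(dd seek=67104768 bs=1 count=0 &&
         dd bs=4096 count=1) </dev/zero 1<>/tmp/vol.img'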

Note newer versions of dd support iflag=count_bytes,
so this could all be done in a single invocation like
the following, but we can't rely on this being available:

 dd iflag=count_bytes if=/dev/zero count=$vol_size bs=4M of=$path
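
If we wanted that opportunistically, support could be probed
at runtime; a minimal sketch (the probe just attempts a
zero-length copy to see whether this dd accepts the flag):

 if dd iflag=count_bytes count=0 if=/dev/null of=/dev/null 2>/dev/null
 then
     # single pass, with a byte-accurate count
     dd iflag=count_bytes if=/dev/zero count=$vol_size bs=4M of=$path
 else
     : # fall back to the two-step seek + write method above
 fi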

Now the sh -c "dd ..." above is quite awkward, and since this
is only for the slop at the end of the volume, it's probably
better just to do this (though it might need a new rootwrap
filter :()

  shred -n0 -z -s$zero_remainder $path

Performance notes...

I suppose these can be looked at later; I'm just noting them
now while they're fresh in my mind.

1. Since we're usually dd'ing an image onto the LV right after
creating it, it's probably sufficient to zero only the
difference between the image size and the volume size
(see the first sketch after this list).

2. Related to the previous point: that only applies to
non-generated images, but the general point holds that we may
be better placed to optimize away some of the zeroing in the
create path rather than the delete path?

3. TBH I don't understand the sparse option to create_lvm_image(),
but the underlying lvcreate --virtualsize might be leveraged
to automatically provide zeros for unwritten parts
(see the second sketch after this list)?

4. I would add iflag=direct oflag=direct to the initial
dd invocation, to minimize kernel processing and at least
be consistent with what's done in cinder for volume clearing.

5. In many situations you wouldn't want the overhead involved
(i.e. you don't need the security), and so would also like to
be able to configure this away, along the lines of:
https://review.openstack.org/#/c/12521/
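
For note 1, the same seek + write idiom from above covers the
tail; a sketch, assuming $image_size and $vol_size are available
in bytes (the variable names here are mine):

 gap=$(($vol_size - $image_size))
 # NB dd allocates a $gap-sized buffer here, so for very large
 # gaps you'd want to loop in fixed-size chunks instead
 sh -c "(dd seek=$image_size bs=1 count=0 &&
         dd bs=$gap count=1) </dev/zero >$path"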
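
And for note 3, the sparse LV form I mean is along the lines
of the lvcreate(8) sparse-volume example (names and sizes here
are placeholders):

 # 1T virtual size backed by only 100M of real allocation;
 # unwritten regions read back as zeros
 lvcreate --virtualsize 1T --size 100M --snapshot --name sparse_vol vg00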