Comment 42 for bug 1023755

Stefan Bader (smb) wrote :

I have been experimenting with the devstack setup on a Xen guest (if possible I want to use Xen because I can take dumps there). The guest setup is 2 VCPUs, 1 GB of memory, and 8 GB plus 5 GB virtual disks (the second is mounted on /opt/stack to be used by the test's volume group; this is a leftover from the other setup, and I can certainly reinstall with a single big enough disk). About 1 GB of swap space is in use on the first disk.

Running the test, the snapshot is successfully created:

#> dmsetup ls --tree
stack--volumes-_snapshot--68081c3b--d18b--48ca--8f15--e14ec8b0606e (252:1)
 ├─stack--volumes-_snapshot--68081c3b--d18b--48ca--8f15--e14ec8b0606e-cow (2...
 │ └─ (7:0)
 └─stack--volumes-volume--37759b39--408e--4f2a--a58c--9b0adf9dfa43-real (252...
    └─ (7:0)
stack--volumes-volume--37759b39--408e--4f2a--a58c--9b0adf9dfa43 (252:0)
 └─stack--volumes-volume--37759b39--408e--4f2a--a58c--9b0adf9dfa43-real (252...
    └─ (7:0)

When deleting the snapshot, the system comes under pressure (and the 30 s timeout seems too short for erasing 2 GB). In contrast to what the lvm commands in comment #32 suggested, it is the snapshot volume that gets erased:

6652 pts/14 S+ 0:00 sudo /usr/local/bin/cinder-rootwrap /etc/cinder/rootwrap.conf dd if=/dev/zero of=/dev/mapper/stack--volumes-_snapshot--68081c3b--d18b--48ca--8f15--e14ec8b0606e count=2048 bs=1M
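As a rough sanity check on that timeout: zeroing 2048 MiB within 30 s requires a sustained write rate of about 68 MiB/s, which is a lot to ask of a loop device under snapshot COW overhead. A minimal sketch of the arithmetic (the 40 MiB/s figure is an assumed example throughput, not a measurement):

```python
# How fast must the dd above write to zero the whole snapshot volume
# before the 30 s timeout fires?
VOLUME_MIB = 2048   # count=2048 bs=1M from the rootwrap command above
TIMEOUT_S = 30      # the timeout mentioned in this report

required_mib_per_s = VOLUME_MIB / TIMEOUT_S   # ~68.3 MiB/s sustained

# Assumed example throughput for a loop-backed device under snapshot
# copy-on-write overhead (hypothetical figure, not measured):
assumed_mib_per_s = 40
erase_time_s = VOLUME_MIB / assumed_mib_per_s  # ~51 s, i.e. past the timeout

print(f"needs {required_mib_per_s:.1f} MiB/s; "
      f"at {assumed_mib_per_s} MiB/s the erase takes {erase_time_s:.0f} s")
```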

This actually makes a lot of sense, because erasing the origin volume would cause much more background IO: every block that gets erased would first have to be copied into the exception storage. By erasing the snapshot, only the zeroed blocks get written to the exception storage, plus (I am not sure how often) updates to the persistent COW table.
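To make the write amplification concrete, here is a toy model of the two erase strategies. It only counts chunk-level reads and writes; the chunk count is made up, the COW-table metadata updates mentioned above are ignored, and this is an illustration of the reasoning, not the real device-mapper implementation:

```python
# Toy model of dm-snapshot copy-on-write I/O: compare zeroing the
# origin volume against zeroing the snapshot volume.

def erase_origin_io(chunks: int) -> dict:
    """Zeroing the origin: each not-yet-copied chunk is first read
    from the origin and written into the exception storage, then the
    zeroed chunk is written to the origin (1 read + 2 writes each)."""
    return {"reads": chunks, "writes": 2 * chunks}

def erase_snapshot_io(chunks: int) -> dict:
    """Zeroing the snapshot: the zeroed chunk goes straight into the
    exception storage; nothing has to be copied first (1 write each)."""
    return {"reads": 0, "writes": chunks}

chunks = 2048  # e.g. a 2 GiB volume with a (made-up) 1 MiB chunk size
print("erase origin:  ", erase_origin_io(chunks))
print("erase snapshot:", erase_snapshot_io(chunks))
```

So erasing the origin would roughly triple the I/O compared to erasing the snapshot, which matches what cinder is actually doing here.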

While the system does not hang for me, I notice that after the erase activity stops (and nova volume-snapshot-list now returns nothing), the device-mapper setup is not clean:

#> dmsetup ls --tree
stack--volumes-_snapshot--68081c3b--d18b--48ca--8f15--e14ec8b0606e-cow (252:...
 └─ (7:0)
stack--volumes-volume--37759b39--408e--4f2a--a58c--9b0adf9dfa43 (252:0)
 └─ (7:0)

#> lvs
  LV VG Attr LSize Origin Snap% Move Log Copy% Convert
  volume-37759b39-408e-4f2a-a58c-9b0adf9dfa43 stack-volumes -wi-ao 2.00g

So the snapshot was mostly torn down, but the exception storage is still present. I was able to remove it manually via dmsetup. After doing that, I could issue a "nova volume-delete" to get rid of the origin volume, too (note that this again does a dd, but compared to the dd on the snapshot, this one seems much faster).
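For reference, the manual cleanup boiled down to a "dmsetup remove" of the orphaned -cow device. A small sketch that only builds and prints the command (the device name is taken from the dmsetup output above; nothing is executed, since that needs root and the device to exist):

```python
import subprocess

def dmsetup_remove_cmd(dm_name: str) -> list:
    """Build the dmsetup invocation that removes a leftover
    device-mapper node (here: the orphaned snapshot -cow device)."""
    return ["sudo", "dmsetup", "remove", dm_name]

# The exception-storage device left behind after the snapshot delete:
leftover = "stack--volumes-_snapshot--68081c3b--d18b--48ca--8f15--e14ec8b0606e-cow"
cmd = dmsetup_remove_cmd(leftover)
print(" ".join(cmd))
# To actually run it on the affected host:
# subprocess.run(cmd, check=True)
```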