shredding LVM volumes may affect performance of other VMs on compute host

Bug #1835201 reported by Pavlo Shchelokovskyy
Affects: OpenStack Compute (nova)
Status: Confirmed
Importance: Wishlist
Assigned to: Unassigned

Bug Description

When configured with LVM local storage for ephemeral partitions, Nova performs a wiping operation using `shred` before removing the volume once a VM is deleted.
`shred` consumes a lot of CPU and almost all disk bandwidth (even on SSDs), which drastically affects the performance of other VMs running on the same compute host.
The larger the volume size specified in the flavor, the longer this operation takes and the longer those VMs suffer.

It would be nice to be able to specify an ionice level for this operation (e.g. `ionice -c3` to use the idle I/O scheduling class).
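As a minimal sketch of the idea: the wipe command could be prepended with an `ionice` prefix when one is configured. The helper name `build_wipe_cmd` and the exact `shred` flags below are illustrative assumptions, not nova's actual code.

```python
# Hedged sketch: prepend an ionice prefix to the volume-wipe command.
# The helper name and shred flags are assumptions for illustration.

def build_wipe_cmd(path, ionice_args=None):
    """Return the volume-wipe command, optionally run under ionice.

    ionice_args: e.g. '-c3' selects the idle I/O scheduling class, so
    the wipe only gets disk time when no other process wants it.
    """
    cmd = ['shred', '-n3', path]
    if ionice_args:
        cmd = ['ionice'] + ionice_args.split() + cmd
    return cmd

# build_wipe_cmd('/dev/vg/lv', '-c3')
# → ['ionice', '-c3', 'shred', '-n3', '/dev/vg/lv']
```

With the idle class, the wipe still completes eventually, but other guests' I/O is no longer starved while it runs.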

Tags: libvirt lvm
Matt Riedemann (mriedem) wrote :

Cinder has a config option for specifying ionice:

https://github.com/openstack/cinder/blob/db88f0a15e75b01c0073fb4d1850420d797b9a3b/cinder/volume/driver.py#L78

I could see the same thing being added to nova.
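For reference, a nova version of this could be declared the same way cinder declares its option. This is a sketch only: the option name and help text below mirror cinder's `volume_clear_ionice`, but no such option exists in nova yet.

```python
# Hypothetical nova config option, modeled on cinder's
# volume_clear_ionice; nothing here is merged nova code.
from oslo_config import cfg

libvirt_opts = [
    cfg.StrOpt('volume_clear_ionice',
               default=None,
               help='Flag passed to ionice to alter the I/O priority of '
                    'the process used to wipe an LVM volume after VM '
                    'deletion, e.g. "-c3" for idle-only priority.'),
]
```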

tags: added: libvirt lvm
Changed in nova:
status: New → Confirmed
importance: Undecided → Wishlist
Matt Riedemann (mriedem) wrote :

@Pavlo, feel free to push a patch for this with a simple test and a feature release note:

https://docs.openstack.org/nova/latest/contributor/releasenotes.html
