Deleting a volume uses "dd", which drives the load up too much
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | Undecided | Pádraig Brady | 2012.2
Bug Description
Dear stackers,
Env: Diablo 2011.3
nova-volume / network / api / scheduler <------> nova-compute #1
I just noticed, for the second time, that removing a volume via "nova volume-delete" first runs a "dd" in order to erase its content.
That dd uses as much CPU as possible, making the instances unreachable; the Python processes (nova-network especially) have a hard time getting any work done. Once the dd completes and the load is back to normal, all my instances are reachable again.
At the same time, instance availability:
http://
Is it possible to run the dd process under "nice", so we make sure it doesn't hog too much of the CPU?
thanks
Razique
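As a rough sketch of the request above (the helper names and the exact flags are assumptions for illustration, not the actual nova code or the attached patch), the dd invocation could be wrapped in "nice"/"ionice" so the wipe yields CPU and I/O priority to the instances:

```python
import subprocess


def build_wipe_cmd(device_path, size_gb):
    """Build a dd command that zeroes a volume at low CPU and I/O priority.

    ionice -c3 puts the process in the "idle" I/O scheduling class, and
    nice -n 19 gives it the lowest CPU priority, so instance workloads
    are served first. (Hypothetical helper; flag choices are illustrative.)
    """
    return [
        "ionice", "-c3",          # idle I/O scheduling class
        "nice", "-n", "19",       # lowest CPU priority
        "dd",
        "if=/dev/zero",
        "of=%s" % device_path,
        "bs=1M",
        "count=%d" % (size_gb * 1024),
    ]


def wipe_volume(device_path, size_gb):
    # Run the low-priority wipe; raises CalledProcessError on failure.
    subprocess.check_call(build_wipe_cmd(device_path, size_gb))
```

With this, the zero-fill still takes as long as before, but the scheduler deprioritizes it whenever instances need CPU or disk time.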
Changed in nova:
status: New → Incomplete
Changed in nova:
milestone: folsom-3 → folsom-rc1
Changed in nova:
status: Fix Committed → Fix Released
Changed in nova:
milestone: folsom-rc1 → 2012.2
You mention CPU as the bottleneck, but that would surprise me with that dd command.
Is perhaps disk the bottleneck? What type of backing storage do you have? What size are the volumes?
From the graph I'm guessing dd uses 35% of your CPU, while saturating the disk writing zeros?
Perhaps dd is also blowing away the buffer cache while doing this,
thus causing more disk thrashing?
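On the buffer-cache point, one mitigation (a sketch under assumptions; the helper name is hypothetical and whether the backing device supports O_DIRECT must be checked) is to have dd write with "oflag=direct", which bypasses the page cache entirely so the zero-fill does not evict pages the running instances depend on:

```python
def build_direct_wipe_cmd(device_path, size_gb, bs_mb=1):
    """Build a dd argv that zeroes a volume using O_DIRECT writes.

    oflag=direct makes dd bypass the kernel page cache, so the wipe
    cannot blow away cached data belonging to the instances. It requires
    the block device to support O_DIRECT and the block size to be
    sector-aligned. (Illustrative helper, not the actual patch.)
    """
    blocks = size_gb * 1024 // bs_mb
    return [
        "dd",
        "if=/dev/zero",
        "of=%s" % device_path,
        "bs=%dM" % bs_mb,
        "count=%d" % blocks,
        "oflag=direct",   # bypass the page cache
    ]
```

Direct I/O trades some raw throughput for cache friendliness, which is usually the right trade on a host that is also running instances.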
Does the attached patch avoid the issue in any way? python/.../nova/volume/driver.py
You'll need to apply that as root under /usr/lib/
(I'd delete any driver.py[co] files there too, just in case),
and restart the nova-volume service.