[backport] Optional storage clear on delete
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Won't Fix | High | John Griffith |
Bug Description
The storage clear on delete has a significant I/O impact.
https:/
When deleting a large volume, you have to wait too long before the storage space can be used again.
On systems with limited I/O performance, such as the OpenStack gate (test) system, it has a significant impact on performance and reliability (test timing).
In private clouds, where all data in the volume group is considered your own, you do not need to sanitize the storage.
Please make this step optionally configurable (default ON), even if it comes with plenty of warnings.
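As a rough illustration of what such an option would gate, here is a minimal shell sketch. The `VOLUME_CLEAR` variable, the `clear_volume` helper, and the use of a plain file standing in for a logical volume are assumptions for illustration only, not Cinder's actual code (which shells out to `dd` against the real device):

```shell
#!/bin/sh
# Sketch of an optional clear-on-delete step (illustrative only).
VOLUME_CLEAR=${VOLUME_CLEAR:-zero}   # default ON, as requested above

clear_volume() {
    dev=$1
    case "$VOLUME_CLEAR" in
        zero)
            # Overwrite the device with zeros before freeing it;
            # this is the expensive I/O this bug complains about.
            size=$(wc -c < "$dev" | tr -d ' ')
            dd if=/dev/zero of="$dev" bs="$size" count=1 conv=notrunc 2>/dev/null
            ;;
        none)
            # Skip sanitization: acceptable when all data in the
            # volume group is trusted (the private-cloud case above).
            ;;
    esac
}

# Demo against a regular file standing in for /dev/<vg>/<lv>:
vol=$(mktemp)
printf 'secret data' > "$vol"
clear_volume "$vol"
od -An -tx1 "$vol"   # all zero bytes when VOLUME_CLEAR=zero
```

For a real block device you would size the write with `blockdev --getsize64` instead of `wc -c`; the point is only that the expensive branch becomes skippable by configuration.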
NOTE:
The device-mapper thin provisioning layer can sanitize chunks on first access, in which case you do not need to sanitize "manually" on delete. This requires at least Linux kernel 3.2, and you can drive it with the dmsetup command.
Newer lvm2 packages also support it via the lvcreate command.
Update:
It is configurable in the master branch, but not in stable/folsom.
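For reference, in the master branch the knob lives in cinder.conf. A sketch, assuming the `volume_clear` option name and values used by Cinder's LVM driver (verify against your release):

```ini
[DEFAULT]
# How to wipe volume data on delete: zero, shred, or none.
# "none" skips sanitization, avoiding the I/O cost described above.
volume_clear = zero
```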
description: updated
summary: - Optional storage clear on delete
+ [backport] Optional storage clear on delete
description: updated
Changed in cinder:
status: Invalid → New
Changed in cinder:
assignee: nobody → John Griffith (john-griffith)
status: New → Confirmed
importance: Undecided → High
The default should probably still be ON.
What granularity can we have for this config option? Globally it makes little sense, but per tenant may not be feasible either, as tenants may be sharing storage space, no?
Can you expand on the LVM capability?
(Also, long term, the storage should do this for us. I hope there's a Cinder API for it.)