VMAX initialize_conn failure when concurrent terminate req for different volume
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Medium | Xing Yang | 2015.1.0
Bug Description
This concurrency issue has the following pieces:
* A storage group in a masking view with a single volume - Vol1.
* An initialize_connection request is in process for the masking view associated with the storage group above. The request is to add a second volume to the storage group - Vol2.
* A terminate_connection request comes in concurrently for Vol1, the only volume currently in the storage group.
* The initialize_connection request finds the existing masking view and storage group and prepares to add Vol2 to it.
* The terminate_connection request removes Vol1 and, finding the storage group now empty, deletes the storage group and masking view.
* Now the initialize_connection continues processing but blows up when it tries to add Vol2 to the now non-existent storage group (a sketch of this interleaving follows below).
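To make the interleaving concrete, here is a self-contained Python sketch that reproduces the same failure against an in-memory dict standing in for the array's storage groups; the names and the sleep-based timing are illustrative only, not the VMAX driver's actual code:

```python
import threading
import time

# In-memory stand-in for the array's storage groups.
storage_groups = {"OS-sg": ["Vol1"]}

def initialize_connection(volume):
    sg = "OS-sg"
    assert sg in storage_groups        # storage group exists at lookup time
    time.sleep(0.1)                    # window in which terminate runs
    storage_groups[sg].append(volume)  # KeyError: the group was deleted

def terminate_connection(volume):
    sg = "OS-sg"
    storage_groups[sg].remove(volume)
    if not storage_groups[sg]:         # group is now empty, so delete it
        del storage_groups[sg]

t1 = threading.Thread(target=initialize_connection, args=("Vol2",))
t2 = threading.Thread(target=terminate_connection, args=("Vol1",))
t1.start()
time.sleep(0.05)
t2.start()
t1.join()
t2.join()
```

Running this reliably raises a KeyError in the initialize thread, the in-memory analogue of the driver blowing up when the storage group has vanished mid-request.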
So the use case here is concurrent cloud deployments where a Nova host has just begun to be utilized: a new VM is deployed around the same time the only existing VM is deleted from the host.
It seems like initialize_connection and terminate_connection need to introduce some level of locking around the masking view/storage group name such that terminate_connection will not remove those entities unless the storage group is both empty AND not locked.
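A minimal sketch of what such locking could look like, using oslo.concurrency's lockutils (already a Cinder dependency); every underscore-prefixed helper here is a hypothetical placeholder for the corresponding array operation, not an actual driver method:

```python
from oslo_concurrency import lockutils

def initialize_connection(volume, connector):
    sg_name = _get_storage_group_name(connector)   # hypothetical helper
    with lockutils.lock(sg_name):
        # While this lock is held, a concurrent terminate_connection
        # cannot delete the storage group out from under us.
        _add_volume_to_storage_group(volume, sg_name)

def terminate_connection(volume, connector):
    sg_name = _get_storage_group_name(connector)   # hypothetical helper
    with lockutils.lock(sg_name):
        _remove_volume_from_storage_group(volume, sg_name)
        # The emptiness check and the deletion happen under the same
        # lock, so no initialize_connection can be mid-flight adding a
        # volume when the group is removed.
        if _storage_group_is_empty(sg_name):
            _delete_storage_group_and_masking_view(sg_name)
```

Note that lockutils.lock is process-local by default, so this only serializes requests within a single cinder-volume process; guarding across processes or hosts would need an external or distributed lock.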
description: updated
tags: added: drivers
tags: added: emc
Changed in cinder:
  assignee: nobody → Xing Yang (xing-yang)
tags: added: vmax
Changed in cinder:
  milestone: none → kilo-3
  importance: Undecided → Medium
  status: New → In Progress
Changed in cinder:
  status: Fix Committed → Fix Released
Changed in cinder:
  milestone: kilo-3 → 2015.1.0
I wonder if this problem still exists with all the bug fixes we submitted lately. Locking could potentially introduce performance problems and other issues. We are actually trying to get rid of locks in Cinder. We will investigate this issue.