Activity log for bug #1195947

Date Who What changed Old value New value Message
2013-06-29 03:19:47 wingwj bug added bug
2013-06-29 03:19:47 wingwj attachment added VM-BDM error-report.docx https://bugs.launchpad.net/bugs/1195947/+attachment/3717766/+files/VM-BDM%20error-report.docx
2013-06-29 04:22:41 wingwj attachment added patch for this bug~ https://bugs.launchpad.net/nova/+bug/1195947/+attachment/3717781/+files/manager.py
2013-06-29 04:34:02 wingwj attachment added Patch Test.docx https://bugs.launchpad.net/nova/+bug/1195947/+attachment/3717782/+files/Patch%20Test.docx
2013-06-29 09:14:59 wingwj attachment added My Patch~ https://bugs.launchpad.net/nova/+bug/1195947/+attachment/3717853/+files/0001-VM-re-scheduler-mechanism-will-cause-BDM-volumes-con.patch
2013-07-17 09:32:20 wingwj nova: assignee wingwj (wingwj)
2013-07-22 12:52:54 Nikola Đipanov nova: status New Incomplete
2013-07-22 15:07:49 Nikola Đipanov nova: status Incomplete Confirmed
2013-07-22 17:08:13 Vish Ishaya nova: importance Undecided High
2013-07-22 17:08:53 Vish Ishaya tags bdm createvm bdm createvm folsom-backport-potential grizzly-backport-potential
2013-08-19 03:33:47 OpenStack Infra nova: status Confirmed In Progress
2013-08-21 16:45:18 wingwj description (old and new values below)

Old value:
We can create VM1 with BDM volumes (for example, one volume we will call "Vol-1"). But when the attached volume (Vol-1) is used in the BDM parameters to create a new VM2, the VM re-scheduler mechanism causes the volume to be re-attached to the new VM2 in Nova & Cinder, instead of raising an "InvalidVolume" exception saying "Vol-1 is already attached to VM1". In fact, Vol-1 is attached to both VM1 and VM2 on the hypervisor, but when you operate on Vol-1 from VM1, you can't see any corresponding changes on VM2. I reproduced it and wrote it up in the doc; please check the attachment for details~

-------------------------

I checked the Nova code; the problem is caused by the VM re-scheduler mechanism. Nova checks the state of BDM volumes with Cinder [def _setup_block_device_mapping() in manager.py]. If any state is "in-use", the request fails and triggers VM re-scheduling. Under the existing flow, before re-scheduling, Nova shuts the VM down and detaches all BDM volumes in Cinder as a rollback [def _shutdown_instance() in manager.py]. As a result, the state of Vol-1 changes from "in-use" to "available" in Cinder, but no detach operation happens on the Nova (hypervisor) side. Therefore, after re-scheduling, the BDM-volume check passes on the second attempt at creating VM2, and all of VM1's BDM volumes (Vol-1) are taken over by VM2 and recorded in the Nova & Cinder DBs. Yet Vol-1 is still attached to VM1 on the hypervisor, and will also be attached to VM2 once VM creation succeeds.

---------------

Moreover, the problem described above occurs when "delete_on_termination" of the BDMs is "False". If the flag is "True", all BDM volumes are deleted in Cinder, because their states were already changed from "in-use" to "available" beforehand [def _cleanup_volumes() in manager.py]. (P.S. Success depends on the specific Cinder driver implementation.) Thanks~

New value:
Due to the re-scheduler mechanism, when a user tries (in error) to create an instance using a volume that is already in use by another instance, the error is correctly detected, but the recovery code incorrectly affects the original instance. An exception should be raised directly when this situation occurs.

------------------------
[followed by the original description above, repeated verbatim]
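For readers unfamiliar with the flow the description refers to, here is a minimal, self-contained Python sketch of the failure sequence. It is not Nova's actual manager.py code; the function names merely mirror the methods the report points to (_setup_block_device_mapping() and _shutdown_instance()), and the Volume class is a hypothetical stand-in for Cinder's volume state.

    class InvalidVolume(Exception):
        """Stand-in for the exception the updated description asks for."""


    class Volume:
        def __init__(self, volume_id):
            self.id = volume_id
            self.status = "in-use"      # Cinder state: Vol-1 is attached to VM1
            self.attached_to = "VM1"    # what the hypervisor still believes


    def setup_block_device_mapping(volume):
        # Mirrors the "in-use" check in _setup_block_device_mapping():
        # an already-attached volume makes the build fail, which in turn
        # triggers a re-schedule.
        if volume.status == "in-use":
            raise RuntimeError("volume %s is in use" % volume.id)
        volume.status = "in-use"


    def shutdown_instance_rollback(volume):
        # Mirrors the rollback in _shutdown_instance(): the volume is
        # detached in Cinder only ("in-use" -> "available"), while the
        # hypervisor-side attachment to VM1 is left untouched.
        volume.status = "available"


    def build_vm2_with_reschedule(volume):
        try:
            setup_block_device_mapping(volume)   # first attempt fails
        except RuntimeError:
            shutdown_instance_rollback(volume)   # Cinder-only rollback
            setup_block_device_mapping(volume)   # retry now passes!
        # VM2 now owns Vol-1 in the Nova/Cinder DBs, yet the hypervisor
        # still has it attached to VM1.
        return volume.status, volume.attached_to


    def build_vm2_fail_fast(volume):
        # The behaviour the updated description asks for: raise
        # InvalidVolume immediately instead of rolling back and
        # re-scheduling, so the Cinder-only detach never happens.
        if volume.status == "in-use":
            raise InvalidVolume("%s is already attached to %s"
                                % (volume.id, volume.attached_to))


    print(build_vm2_with_reschedule(Volume("Vol-1")))  # ('in-use', 'VM1')

Per the updated description, the intended fix is the fail-fast branch: reject the request with InvalidVolume up front rather than entering the rollback/re-schedule path that strands the volume in an inconsistent state.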
2013-10-31 17:34:23 Russell Bryant nominated for series nova/havana
2013-10-31 17:34:23 Russell Bryant bug task added nova/havana
2013-10-31 17:38:55 Nikola Đipanov nova/havana: importance Undecided Critical
2013-10-31 17:39:00 Nikola Đipanov nova/havana: importance Critical High
2013-12-05 02:13:24 hougangliu bug added subscriber hougangliu
2013-12-05 08:00:17 Yogev Rabl bug added subscriber Yogev Rabl
2013-12-07 02:02:38 Alan Pevec nova/havana: status New In Progress
2013-12-07 02:05:30 Alan Pevec nova/havana: assignee Nikola Đipanov (ndipanov)
2013-12-10 01:05:00 OpenStack Infra nova/havana: status In Progress Fix Committed
2013-12-14 20:19:41 Alan Pevec nova/havana: milestone 2013.2.1
2013-12-16 20:40:17 Alan Pevec nova/havana: status Fix Committed Fix Released
2014-03-17 07:37:42 Nikola Đipanov nova: milestone icehouse-rc1
2014-03-19 14:44:18 Tracy Jones nova: status In Progress Fix Committed
2014-03-30 23:01:12 Alan Pevec tags bdm createvm folsom-backport-potential grizzly-backport-potential bdm createvm
2014-03-31 19:02:32 Thierry Carrez nova: status Fix Committed Fix Released
2014-04-17 09:12:43 Thierry Carrez nova: milestone icehouse-rc1 2014.1