Activity log for bug #1431406

Date Who What changed Old value New value Message
2015-03-12 14:56:07 Michael Steffens bug added bug
2015-03-12 14:56:58 Michael Steffens tags iscsi windows
2015-03-16 08:19:09 Michael Steffens description
  (The edit made minor wording fixes to the original report: "managed by Heat", "and attach the volume", "Windows guest VM". Description after the edit:)

  We are regularly encountering this situation when deleting stacks managed by Heat. It can be reproduced without Heat, however, using just Nova and Cinder:

  1. Create a Windows guest VM, for example using CloudBase's image windows_server_2012_r2_standard_eval_kvm_20140607.
  2. Create a volume (50 GB) and attach the volume to the instance.
  3. Log into the instance. Start Computer Management -> Disk Management.
  4. Online the disk. Initialize and format the volume, and assign drive letter D. Create some small garbage data on D:.
  5. In Glance, detach the volume from the instance. (Without shutting down the instance first. This is apparently what Heat does when deleting a stack.)

  On the compute node you will now see dmesg and syslog flooded, about once per second, with messages like

    [768938.979494] connection18:0: detected conn error (1020)

  On the compute node

    iscsiadm --mode session --print=1

  shows the iSCSI initiator session still logged in, while on the Cinder storage node

    tgtadm --lld iscsi --op show --mode target

  shows that the iSCSI target is gone. The recurring connection errors on the compute node persist until the iSCSI session is manually logged off.

  You may argue that performing the detachment while the volume is online and in use is unclean, and that the issue is therefore Heat's responsibility. However, even if that were the case, such an operation should not leave stale iSCSI sessions accumulating until manual intervention via a root shell on the compute node.

  Additional information:
  - We couldn't reproduce this problem with Linux guest instances. Even when a volume is detached while mounted and in use by the instance, iSCSI sessions are cleaned up gracefully.
  - We can reproduce this problem with both Icehouse and Juno.
  - We can reproduce the problem with both single-node and multi-node OpenStack configurations, the latter using separate hosts for compute and storage.
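The session listing quoted above can also be inspected programmatically. As an editorial aside, here is a minimal sketch of parsing `iscsiadm --mode session` output to enumerate the sessions an initiator still holds; it assumes the common one-line-per-session output format, and the sample lines and volume IQNs are made up, not taken from this bug:

```python
import re

# Hypothetical sample of `iscsiadm --mode session` output; in practice this
# text would come from running the command on the compute node.
SAMPLE = """\
tcp: [18] 192.168.1.20:3260,1 iqn.2010-10.org.openstack:volume-1a2b3c4d
tcp: [19] 192.168.1.20:3260,1 iqn.2010-10.org.openstack:volume-5e6f7a8b
"""

# One session per line: "<transport>: [<session id>] <portal>,<tpgt> <target IQN>"
SESSION_RE = re.compile(
    r"^(?P<transport>\w+): \[(?P<sid>\d+)\] (?P<portal>\S+?),\d+ (?P<iqn>\S+)"
)

def parse_sessions(text):
    """Return (session_id, portal, target_iqn) tuples from iscsiadm output."""
    sessions = []
    for line in text.splitlines():
        m = SESSION_RE.match(line)
        if m:
            sessions.append((int(m.group("sid")), m.group("portal"), m.group("iqn")))
    return sessions

for sid, portal, iqn in parse_sessions(SAMPLE):
    print(f"session {sid}: {iqn} via {portal}")
```

A session that still appears here after its volume was detached, while the matching target has vanished from the `tgtadm` listing on the storage node, is exactly the stale state the report describes.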
2015-08-13 13:29:36 Michael Steffens description
  (Step 5 changed from "In Glance detach volume from instance" to "In Cinder detach volume from instance"; the description is otherwise unchanged.)
2015-08-13 13:30:56 Michael Steffens description
  (Step 5 changed from "In Cinder detach volume from instance" to "In Nova detach volume from instance"; the description is otherwise unchanged.)
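The manual intervention the report describes amounts to: find every IQN the compute node is still logged into that the storage node no longer exports, and log each one out. A small sketch of that bookkeeping, with made-up placeholder IQNs (the real lists would come from `iscsiadm --mode session` on the compute node and `tgtadm --lld iscsi --op show --mode target` on the storage node):

```python
def stale_sessions(initiator_iqns, exported_iqns):
    """IQNs the initiator is still logged into but the target host no longer serves."""
    return sorted(set(initiator_iqns) - set(exported_iqns))

def logout_commands(iqns):
    """Render the iscsiadm logout invocations an operator would run as root."""
    return [f"iscsiadm --mode node --targetname {iqn} --logout" for iqn in iqns]

# Placeholder data: volume-bbbb was detached, so its target is gone on the
# storage node while the compute node still holds a session to it.
compute_side = ["iqn.2010-10.org.openstack:volume-aaaa",
                "iqn.2010-10.org.openstack:volume-bbbb"]
storage_side = ["iqn.2010-10.org.openstack:volume-aaaa"]

for cmd in logout_commands(stale_sessions(compute_side, storage_side)):
    print(cmd)
```

This only reproduces the cleanup an operator performs by hand; the point of the bug is that Nova/Cinder should perform the logout themselves as part of the detach.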
2015-08-13 13:47:22 Michael Steffens attachment added Sequence of volume detachment from a Windows guest instance during stack destruction https://bugs.launchpad.net/cinder/+bug/1431406/+attachment/4444170/+files/nova-compute.log
2016-05-12 15:20:24 Louis Bouchard bug task added ubuntu
2016-05-12 15:20:40 Louis Bouchard bug task deleted ubuntu
2016-05-12 15:23:25 Louis Bouchard bug task added cinder (Ubuntu)
2016-05-12 15:24:27 Louis Bouchard nominated for series Ubuntu Vivid
2016-05-12 15:24:27 Louis Bouchard bug task added cinder (Ubuntu Vivid)
2016-05-12 15:24:27 Louis Bouchard nominated for series Ubuntu Trusty
2016-05-12 15:24:27 Louis Bouchard bug task added cinder (Ubuntu Trusty)
2016-08-15 02:17:43 adaqi bug added subscriber adaqi
2016-09-08 11:30:41 James Page bug task deleted cinder (Ubuntu Trusty)
2016-09-08 11:30:52 James Page bug task deleted cinder (Ubuntu Vivid)
2016-09-08 11:48:38 James Page cinder (Ubuntu): importance Undecided Medium
2017-03-02 20:18:02 Corey Bryant cinder (Ubuntu): status New Triaged