In one case, we're attaching the encrypted LUKS volume to the instance here:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_38_09_061
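For reference, the encrypted-volume attach goes through Nova's volume encryptor layer on top of the normal connect. A minimal sketch of that step, assuming the Kilo-era nova.volume.encryptors API (treat the exact names and signatures as approximations, not a verbatim copy of the tree):

    from nova.volume import encryptors

    def attach_encrypted_volume(context, connection_info, encryption):
        # Resolve the encryptor matching the volume's encryption provider,
        # e.g. the LUKS one for 'nova.volume.encryptors.luks.LuksEncryptor'.
        encryptor = encryptors.get_volume_encryptor(connection_info, **encryption)
        # For LUKS this ends up shelling out to 'cryptsetup luksOpen', so the
        # instance sees the decrypted dm-crypt device rather than the raw one.
        encryptor.attach_volume(context, **encryption)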
We initialize the connection and get the connection_info back here:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_38_11_064
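That exchange is Cinder's os-initialize_connection: Nova sends a connector describing the compute host and gets connection_info back. A hedged sketch with python-cinderclient (the credentials, volume ID, connector values, and the sample connection_info shape are all illustrative, not taken from these logs):

    from cinderclient import client as cinder_client

    # USERNAME, PASSWORD, TENANT, AUTH_URL, VOLUME_ID are placeholders.
    cinder = cinder_client.Client('1', USERNAME, PASSWORD, TENANT, AUTH_URL)

    # Connector describing the compute host.
    connector = {
        'ip': '10.0.0.2',
        'host': 'devstack',
        'initiator': 'iqn.1993-08.org.debian:01:deadbeef',
    }
    connection_info = cinder.volumes.initialize_connection(VOLUME_ID, connector)
    # For an iSCSI backend this typically carries something like:
    #   {'driver_volume_type': 'iscsi',
    #    'data': {'target_portal': ..., 'target_iqn': ..., 'target_lun': ...}}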
I see an os-attach call here:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_38_15_223
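The os-attach action is what flips the volume to 'in-use' in Cinder and records which instance and mountpoint it is attached to. Continuing the sketch above (the instance UUID and mountpoint are placeholders):

    cinder.volumes.attach(VOLUME_ID,
                          instance_uuid='<instance uuid>',
                          mountpoint='/dev/vdb')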
We start detaching the volume here:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_38_16_902
We're failing to detach the volume here:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_38_17_567
And six minutes later we're terminating the BDM (block device mapping) for that volume here:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_44_54_876
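For comparison, the happy-path detach sequence against Cinder looks roughly like the sketch below (reusing the client and connector from the earlier sketch). When the host-side detach fails as above, Nova never gets through these steps cleanly, which would be consistent with the BDM lingering until the later cleanup:

    cinder.volumes.begin_detaching(VOLUME_ID)            # status -> 'detaching'
    cinder.volumes.terminate_connection(VOLUME_ID, connector)
    cinder.volumes.detach(VOLUME_ID)                     # status -> 'available'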
After the failed detach, I'm also seeing the same volume_id show up in the logs of later test runs:
VolumesV1SnapshotTestJSON:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_42_03_507
TestMinimumBasicScenario:
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_44_40_119