I think I see at least one pattern.
1) Looking at log [1]
Found this line:
Details: Timed out waiting for thing 2da86cc2-0c74-4712-8e7c-36280d231279 to become ACTIVE
2) Looking at the n-cpu log, find the request id [2]
2013-12-18 19:29:21.909 AUDIT nova.compute.manager [req-3333ca13-880e-4df7-84bf-35b68dd4490a demo demo] [instance: 2da86cc2-0c74-4712-8e7c-36280d231279] Starting instance...
3) Find the last instance of the request id
2013-12-18 19:29:22.792 DEBUG nova.openstack.common.processutils [req-3333ca13-880e-4df7-84bf-35b68dd4490a demo demo] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd6 /tmp/openstack-vfs-localfs_CB_RP execute /opt/stack/new/nova/nova/openstack/common/processutils.py:147
Could mount be hung, given that /dev/nbd6 is being reused (see [3])?
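The triage steps above can be sketched as a quick grep pass. This is only an illustration: the log file below is a stand-in built from the excerpts quoted in this comment, not the real gate log (those are at the URLs in [1]-[3]).

```shell
# Stand-in log assembled from the excerpts above, for illustration only.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2013-12-18 19:29:21.909 AUDIT nova.compute.manager [req-3333ca13-880e-4df7-84bf-35b68dd4490a demo demo] [instance: 2da86cc2-0c74-4712-8e7c-36280d231279] Starting instance...
2013-12-18 19:29:22.792 DEBUG nova.openstack.common.processutils [req-3333ca13-880e-4df7-84bf-35b68dd4490a demo demo] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf mount /dev/nbd6 /tmp/openstack-vfs-localfs_CB_RP
EOF
REQ=req-3333ca13-880e-4df7-84bf-35b68dd4490a

# Last line for this request id: if the trail ends at the mount command,
# mount is the prime suspect for the hang.
LAST=$(grep "$REQ" "$LOG" | tail -n 1)
echo "$LAST"

# Which request ids ever touched /dev/nbd6; more than one would
# confirm the device is being reused across requests.
grep '/dev/nbd6' "$LOG" | grep -o 'req-[0-9a-f-]*' | sort -u

rm -f "$LOG"
```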
[1] http://logs.openstack.org/30/62530/2/gate/gate-tempest-dsvm-neutron-pg/2e085b3/console.html
[2] http://logs.openstack.org/30/62530/2/gate/gate-tempest-dsvm-neutron-pg/2e085b3/logs/screen-n-cpu.txt.gz
[3] http://logs.openstack.org/30/62530/2/gate/gate-tempest-dsvm-neutron-pg/2e085b3/logs/screen-n-cpu.txt.gz#_2013-12-18_19_28_32_684
Other logs that have the same issue:
http://logs.openstack.org/43/59743/12/check/check-tempest-dsvm-neutron-pg/689a38c/logs/screen-n-cpu.txt.gz#_2013-12-18_15_11_50_960
http://logs.openstack.org/85/61085/2/check/check-tempest-dsvm-neutron/bb95e64/logs/screen-n-cpu.txt.gz#_2013-12-18_12_14_07_163