Activity log for bug #1371592

Date                 Who               Change (old value → new value)
2014-09-19 13:04:39  Sandip Dey        bug added
2014-09-20 01:57:03  Sandip Dey        bug task added: juniperopenstack
2014-09-20 01:57:15  Sandip Dey        bug task deleted: toontowninfinite
2014-09-22 07:13:51  Nagabhushana R    tags: sanity → nova openstack sanity
2014-09-23 22:53:51  Ashish Ranjan     juniperopenstack: assignee → Hampapur Ajay (hajay)
2014-09-24 03:45:10  Vedamurthy Joshi  nominated for series: juniperopenstack/r1.1
2014-09-24 03:45:10  Vedamurthy Joshi  bug task added: juniperopenstack/r1.1
2014-09-24 03:45:10  Vedamurthy Joshi  nominated for series: juniperopenstack/trunk
2014-09-24 03:45:10  Vedamurthy Joshi  bug task added: juniperopenstack/trunk
2014-09-24 03:45:27  Vedamurthy Joshi  juniperopenstack/r1.1: assignee → Hampapur Ajay (hajay)
2014-09-24 03:45:31  Vedamurthy Joshi  juniperopenstack/r1.1: importance Undecided → High
2014-09-24 03:45:34  Vedamurthy Joshi  juniperopenstack/r1.1: milestone → r1.20
2014-09-24 03:45:42  Vedamurthy Joshi  juniperopenstack/trunk: importance Undecided → High
2014-09-24 10:24:50  Nagabhushana R    bug task deleted: juniperopenstack/r1.1
2014-09-24 10:25:02  Nagabhushana R    tags: nova openstack sanity → blocker nova openstack sanity
2014-10-20 16:43:12  Nagabhushana R    summary: "Mainline 2233:Nova runtime error while running tests in parallel" → "Mainline 2333:Nova runtime error while running tests in parallel"
2014-12-02 12:15:36  Vinay Mahuli      nominated for series: juniperopenstack/r2.0
2014-12-02 12:15:36  Vinay Mahuli      bug task added: juniperopenstack/r2.0
2014-12-04 06:36:16  Nagabhushana R    juniperopenstack/r2.0: assignee → Hampapur Ajay (hajay)
2014-12-04 06:36:47  Nagabhushana R    juniperopenstack/r2.0: importance Undecided → High
2014-12-04 06:52:37  Hampapur Ajay     juniperopenstack/r2.0: status New → Fix Committed
2014-12-04 06:52:41  Hampapur Ajay     juniperopenstack/trunk: status New → Fix Committed
2017-10-24 18:17:46  Vedamurthy Joshi  description changed. The only difference between the old and new descriptions is where the logs are saved:

  Old: logs saved at /home/bhushana/Documents/technical/bugs/<bug-id> on mayamruga.englab.juniper.net. Login with the following credentials: USN : bhushana PWD : bhu@123
  New: logs saved at /cs-shared/bugs/<bug-id> on any blr shell server (e.g. nodeb6)

The rest of the description is identical in both versions:

Main line : 2333
Centos 64

Getting the following error while running sanity in parallel. This could be affecting the sanity result.

Logs
====
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]     chunk = self.read(sz - have)
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]   File "/usr/lib64/python2.6/site-packages/thrift/transport/TSocket.py", line 103, in read
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]     buff = self.handle.recv(sz)
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]   File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 264, in recv
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]     timeout_exc=socket.timeout("timed out"))
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]   File "/usr/lib/python2.6/site-packages/eventlet/hubs/__init__.py", line 151, in trampoline
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]     listener = hub.add(hub.READ, fileno, current.switch)
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]   File "/usr/lib/python2.6/site-packages/eventlet/hubs/epolls.py", line 48, in add
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]     listener = BaseHub.add(self, evtype, fileno, cb)
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]   File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 126, in add
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]     evtype, fileno, evtype))
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f] RuntimeError: Second simultaneous read on fileno 9 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_prevent_multiple_readers(False)
2014-09-19 04:41:00.165 4355 TRACE nova.compute.manager [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f]
2014-09-19 04:41:02.193 4355 ERROR nova.virt.libvirt.driver [-] [instance: fcae8094-4bf6-4401-b081-0d3fd6221f7f] During wait destroy, instance disappeared.
2014-09-19 04:41:02.548 4355 ERROR nova.virt.driver [-] Exception dispatching event <nova.virt.event.LifecycleEvent object at 0x49a0c90>: Info cache for instance d5e30215-51ad-4f11-b641-afb9c4a873e3 could not be found.
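The RuntimeError above is eventlet refusing to let two greenthreads wait on the same socket at once; the error text itself suggests a pools.Pool so each greenthread gets exclusive use of one connection. A minimal stdlib sketch of that pattern (queue.Queue standing in for eventlet's pool, FakeConn as a hypothetical stand-in for the thrift transport — neither name is from the bug):

```python
import queue
import threading

class ConnPool:
    """Hand out connections so no two workers use one simultaneously."""
    def __init__(self, conns):
        self._q = queue.Queue()
        for c in conns:
            self._q.put(c)

    def get(self):
        # Blocks until some connection is free, like eventlet pools.Pool.get().
        return self._q.get()

    def put(self, conn):
        self._q.put(conn)

class FakeConn:
    """Hypothetical stand-in for a socket; detects overlapping reads."""
    def __init__(self):
        self._busy = threading.Lock()

    def read(self):
        if not self._busy.acquire(blocking=False):
            # Analogue of "Second simultaneous read on fileno N detected"
            raise RuntimeError("second simultaneous read detected")
        try:
            return b"data"
        finally:
            self._busy.release()

def worker(pool, results):
    conn = pool.get()              # exclusive use while held
    try:
        results.append(conn.read())
    finally:
        pool.put(conn)             # return it for the next worker

pool = ConnPool([FakeConn(), FakeConn()])
results = []
threads = [threading.Thread(target=worker, args=(pool, results)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 8 — every read completed without a collision
```

Without the pool (all workers calling read() on one shared connection concurrently), the FakeConn check can trip exactly as nova-compute did here when parallel sanity tests multiplexed one thrift socket.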