Activity log for bug #1026621

Date | Who | What changed | Old value | New value | Message
2012-07-19 14:02:47 | Nick Moffitt | bug | | | added bug
2012-07-19 14:14:25 | Nick Moffitt | bug | | | added subscriber The Canonical Sysadmins
2012-07-19 14:15:01 | Michael Nelson | bug | | | added subscriber Michael Nelson
2012-07-19 14:15:35 | Nick Moffitt | bug | | | added subscriber Dave Walker
2012-07-19 15:02:20 | Launchpad Janitor | nova (Ubuntu): status | New | Confirmed |
2012-07-19 21:54:13 | Adam Gandelman | bug task added | | | nova
2012-08-06 15:25:39 | Thierry Carrez | bug | | | added subscriber Thierry Carrez
2012-08-22 13:35:08 | Thierry Carrez | nova: status | New | Incomplete |
2013-03-18 11:35:23 | Davanum Srinivas (DIMS) | nova: status | Incomplete | Confirmed |
2013-09-19 19:19:10 | Shawn Duex | bug | | | added subscriber Shawn Duex
2013-09-23 22:39:02 | Jay Farschman | bug | | | added subscriber Jay Farschman
2013-09-26 03:07:18 | Michael Still | tags | canonistack | canonistack libvirt |
2013-09-26 04:39:45 | Michael Still | nova: importance | Undecided | High |
2013-09-26 04:39:50 | Michael Still | nova: status | Confirmed | Triaged |
2014-03-05 07:06:04 | sark2012 | description | see below | see below |

(The old and new description values are near-identical; the edit only indented the quoted log lines. The description reads:)

We've been seeing a lot of instances simply vanish from the network. Usually people have been willing to work around this by simply rebooting or re-creating their instances, but it's troubling for long-running instances (especially those that have volumes associated). Here's the relevant bit of nova-network.log for one of these:

    2012-07-16 14:06:32 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-d0905711-c4d1-4452-a3b2-46815d1983d7', u'_context_read_deleted': u'no', u'args': {u'address': u'10.55.60.141'}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': None, u'_context_timestamp': u'2012-07-16T14:06:32.169100', u'_context_user_id': None, u'method': u'release_fixed_ip', u'_context_remote_address': None} from (pid=493) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
    2012-07-16 14:06:32 DEBUG nova.rpc.amqp [req-d0905711-c4d1-4452-a3b2-46815d1983d7 None None] unpacked context: {'user_id': None, 'roles': [u'admin'], 'timestamp': '2012-07-16T14:06:32.169100', 'auth_token': '<SANITIZED>', 'remote_address': None, 'is_admin': True, 'request_id': u'req-d0905711-c4d1-4452-a3b2-46815d1983d7', 'project_id': None, 'read_deleted': u'no'} from (pid=493) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
    2012-07-16 14:06:32 DEBUG nova.network.manager [req-d0905711-c4d1-4452-a3b2-46815d1983d7 None None] Released IP |10.55.60.141| from (pid=493) release_fixed_ip /usr/lib/python2.7/dist-packages/nova/network/manager.py:1260

Then the dhcpbridge shows it being revoked:

    2012-07-16 14:04:29 DEBUG nova.dhcpbridge [-] Called 'old' for mac 'fa:16:3e:11:c5:37' with ip '10.55.60.141' from (pid=23699) main /usr/bin/nova-dhcpbridge:113
    2012-07-16 14:06:32 DEBUG nova.dhcpbridge [-] Called 'del' for mac 'fa:16:3e:11:c5:37' with ip '10.55.60.141' from (pid=24946) main /usr/bin/nova-dhcpbridge:113

Is there any way we can find out what might have placed the release_fixed_ip event on the message queue? There doesn't seem to be any other mention of the IP in the nova logs on any of our systems.
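(One way to narrow down the reporter's closing question: the amqp context in the received message carries a request id, and the service that originated the RPC normally logs that same id before publishing, so grepping every nova log on every host for the id can point at the sender. A minimal sketch; the local fixture file is invented for illustration, and in a real deployment you would instead grep /var/log/nova/*.log on each host:)

```shell
# Demo on a local fixture: pull the request id out of a nova-network.log
# excerpt so it can be grepped for across the other services' logs.
# /tmp/nova-network.log is a hypothetical sample, not a real deployment path.
cat > /tmp/nova-network.log <<'EOF'
2012-07-16 14:06:32 DEBUG nova.rpc.amqp [-] received {u'method': u'release_fixed_ip', u'_context_request_id': u'req-d0905711-c4d1-4452-a3b2-46815d1983d7'}
EOF

# Extract the unique request id(s); in practice you would then run
# grep -rl "$REQ_ID" /var/log/nova/ on each compute/network host.
grep -o 'req-[0-9a-f-]*' /tmp/nova-network.log | sort -u
```

(On the fixture above this prints the single id req-d0905711-c4d1-4452-a3b2-46815d1983d7; a host whose logs mention the id *before* 14:06:32 is a candidate originator.)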
2014-08-01 12:56:39 | James Page | nova (Ubuntu): status | Confirmed | Triaged |
2014-08-01 12:56:44 | James Page | nova (Ubuntu): importance | Undecided | Medium |
2015-03-21 13:43:12 | Tong Da | bug | | | added subscriber Tong Da
2015-03-30 14:27:27 | Sean Dague | marked as duplicate | | 1231254 |