OpenStack Compute (Nova)

terminate instance didn't clean ip information

Reported by Jay Lee on 2011-12-01
This bug affects 3 people
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: High
Assigned to: Vish Ishaya (vishvananda)
Milestone: 2012.1

Bug Description

This is the command to terminate an instance:
# nova delete <instance-id>
The compute API then ends up calling self.db.instance_destroy(context, instance_id):

nova/compute/manager.py, _delete_instance() (around line 555):

def _delete_instance(self, context, instance_id):
    """Delete an instance on this host."""
    self._shutdown_instance(context, instance_id, 'Terminating', True)
    self._cleanup_volumes(context, instance_id)
    instance = self.db.instance_get(context.elevated(), instance_id)
    self._instance_update(context,
                          instance_id,
                          vm_state=vm_states.DELETED,
                          task_state=None,
                          terminated_at=utils.utcnow())

    self.db.instance_destroy(context, instance_id)

    usage_info = utils.usage_from_instance(instance)
    notifier.notify('compute.%s' % self.host,
                    'compute.instance.delete',
                    notifier.INFO, usage_info)

Finally, instance_destroy() is called; it is defined around line 1168 of nova/db/sqlalchemy/api.py:
@require_context
def instance_destroy(context, instance_id):
    session = get_session()
    with session.begin():
        session.query(models.Instance).\
                filter_by(id=instance_id).\
                update({'deleted': True,
                        'deleted_at': utils.utcnow(),
                        'updated_at': literal_column('updated_at')})
        session.query(models.SecurityGroupInstanceAssociation).\
                filter_by(instance_id=instance_id).\
                update({'deleted': True,
                        'deleted_at': utils.utcnow(),
                        'updated_at': literal_column('updated_at')})
        session.query(models.InstanceMetadata).\
                filter_by(instance_id=instance_id).\
                update({'deleted': True,
                        'deleted_at': utils.utcnow(),
                        'updated_at': literal_column('updated_at')})
        session.query(models.BlockDeviceMapping).\
                filter_by(instance_id=instance_id).\
                update({'deleted': True,
                        'deleted_at': utils.utcnow(),
                        'updated_at': literal_column('updated_at')})

But this update does not clean up or delete any of the instance's network information: instance_destroy() only updates the instances, security_group_instance_association, instance_metadata, and block_device_mapping tables. Because it never touches fixed_ips or floating_ips, an error occurs once every fixed or floating IP address has been allocated to some instance at least once.

For example:
# nova list
+--------------------------------------+--------------+--------+------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------------+--------+------------------------------------+
| 1fcc667d-a8cd-407b-864c-21b021bfbb19 | CentOS_1 | ACTIVE | private=10.0.152.23 |
| a2561d60-5bb4-4429-b946-67df5abb247d | network_test | ACTIVE | private=10.0.152.31 |
| bbfa18ce-a21a-460c-8689-e746e0583939 | backend | ACTIVE | private=10.0.152.21 |
| cf737e81-aa7a-4ce5-b8bd-bce503d96b06 | wintest | ACTIVE | private=10.0.152.22, 192.168.0.208 |
+--------------------------------------+--------------+--------+------------------------------------+

mysql> select * from fixed_ips where deleted=0;
+---------------------+---------------------+------------+---------+----+-------------+------------+-------------+-----------+--------+----------+----------------------+---------+
| created_at | updated_at | deleted_at | deleted | id | address | network_id | instance_id | allocated | leased | reserved | virtual_interface_id | host |
+---------------------+---------------------+------------+---------+----+-------------+------------+-------------+-----------+--------+----------+----------------------+---------+
| 2011-11-10 01:41:21 | NULL | NULL | 0 | 1 | 10.0.152.0 | 1 | NULL | 0 | 0 | 1 | NULL | NULL |
| 2011-11-10 01:41:21 | NULL | NULL | 0 | 2 | 10.0.152.1 | 1 | NULL | 0 | 0 | 1 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-15 01:40:58 | NULL | 0 | 3 | 10.0.152.2 | 1 | 1 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-10 02:22:19 | NULL | 0 | 4 | 10.0.152.3 | 1 | NULL | 0 | 0 | 0 | NULL | nova152 |
| 2011-11-10 01:41:21 | 2011-11-15 01:37:06 | NULL | 0 | 5 | 10.0.152.4 | 1 | 2 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-15 04:32:53 | NULL | 0 | 6 | 10.0.152.5 | 1 | 3 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-15 04:32:47 | NULL | 0 | 7 | 10.0.152.6 | 1 | 4 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-15 04:32:42 | NULL | 0 | 8 | 10.0.152.7 | 1 | 5 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-15 04:32:59 | NULL | 0 | 9 | 10.0.152.8 | 1 | 6 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:20 | NULL | 0 | 10 | 10.0.152.9 | 1 | 7 | 1 | 0 | 0 | 7 | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:21 | NULL | 0 | 11 | 10.0.152.10 | 1 | 8 | 1 | 0 | 0 | 8 | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:22 | NULL | 0 | 12 | 10.0.152.11 | 1 | 9 | 1 | 0 | 0 | 9 | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:24 | NULL | 0 | 13 | 10.0.152.12 | 1 | 10 | 1 | 0 | 0 | 10 | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:25 | NULL | 0 | 14 | 10.0.152.13 | 1 | 11 | 1 | 0 | 0 | 11 | NULL |
| 2011-11-10 01:41:21 | 2011-11-15 05:39:57 | NULL | 0 | 15 | 10.0.152.14 | 1 | 12 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-16 06:29:44 | NULL | 0 | 16 | 10.0.152.15 | 1 | 13 | 1 | 0 | 0 | 13 | NULL |
| 2011-11-10 01:41:21 | 2011-11-16 07:32:12 | NULL | 0 | 17 | 10.0.152.16 | 1 | 14 | 1 | 0 | 0 | 14 | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:27 | NULL | 0 | 18 | 10.0.152.17 | 1 | 15 | 1 | 0 | 0 | 15 | NULL |
| 2011-11-10 01:41:21 | 2011-11-17 23:47:28 | NULL | 0 | 19 | 10.0.152.18 | 1 | 16 | 1 | 0 | 0 | 16 | NULL |
| 2011-11-10 01:41:21 | 2011-11-16 08:22:54 | NULL | 0 | 20 | 10.0.152.19 | 1 | 17 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-16 08:28:41 | NULL | 0 | 21 | 10.0.152.20 | 1 | 18 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-12-01 05:44:58 | NULL | 0 | 22 | 10.0.152.21 | 1 | 19 | 1 | 1 | 0 | 19 | NULL |
| 2011-11-10 01:41:21 | 2011-12-01 05:44:17 | NULL | 0 | 23 | 10.0.152.22 | 1 | 20 | 1 | 1 | 0 | 20 | NULL |
| 2011-11-10 01:41:21 | 2011-12-01 05:44:54 | NULL | 0 | 24 | 10.0.152.23 | 1 | 21 | 1 | 1 | 0 | 21 | NULL |
| 2011-11-10 01:41:21 | 2011-11-29 08:51:37 | NULL | 0 | 25 | 10.0.152.24 | 1 | 22 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-29 08:54:12 | NULL | 0 | 26 | 10.0.152.25 | 1 | 23 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-29 08:53:49 | NULL | 0 | 27 | 10.0.152.26 | 1 | 24 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-30 02:00:44 | NULL | 0 | 28 | 10.0.152.27 | 1 | 25 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-12-01 04:13:22 | NULL | 0 | 29 | 10.0.152.28 | 1 | 26 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-30 11:14:07 | NULL | 0 | 30 | 10.0.152.29 | 1 | 27 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-30 02:05:08 | NULL | 0 | 31 | 10.0.152.30 | 1 | 28 | 0 | 0 | 0 | NULL | NULL |
| 2011-11-10 01:41:21 | 2011-11-30 09:02:26 | NULL | 0 | 32 | 10.0.152.31 | 1 | 30 | 1 | 0 | 0 | 30 | NULL |
+---------------------+---------------------+------------+---------+----+-------------+------------+-------------+-----------+--------+----------+----------------------+---------+

Every fixed IP has been allocated at least once. But now there are only 4 instances; the others were deleted.

Then, when I try to boot a new instance, nova fails:
/var/log/nova/nova-compute.log
2011-12-01 14:46:40,776 DEBUG nova.rpc [-] received {u'_context_roles': [], u'_context_request_id': u'16045694-f17a-4c22-bb5b-1207c2f72b24', u'_context_read_deleted': False, u'args': {u'instance_id': 32, u'requested_networks': None, u'admin_password': u'v4PTPrMAgn2Z', u'injected_files': []}, u'_context_auth_token': None, u'_context_strategy': u'noauth', u'_context_is_admin': True, u'_context_project_id': u'sds', u'_context_timestamp': u'2011-12-01T05:46:40.217989', u'_context_user_id': u'nova', u'method': u'run_instance', u'_context_remote_address': u'127.0.0.1'} from (pid=5977) __call__ /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:600
2011-12-01 14:46:40,777 DEBUG nova.rpc [-] unpacked context: {'user_id': u'nova', 'roles': [], 'timestamp': u'2011-12-01T05:46:40.217989', 'auth_token': None, 'msg_id': None, 'remote_address': u'127.0.0.1', 'strategy': u'noauth', 'is_admin': True, 'request_id': u'16045694-f17a-4c22-bb5b-1207c2f72b24', 'project_id': u'sds', 'read_deleted': False} from (pid=5977) _unpack_context /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:646
2011-12-01 14:46:47,914 AUDIT nova.compute.manager [16045694-f17a-4c22-bb5b-1207c2f72b24 nova sds] instance 32: starting...
2011-12-01 14:46:47,988 DEBUG nova.rpc [-] Making asynchronous call on network ... from (pid=5977) multicall /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:721
2011-12-01 14:46:47,988 DEBUG nova.rpc [-] MSG_ID is 9436655168cb45c7aea3ba4017fc58d4 from (pid=5977) multicall /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:724
2011-12-01 14:46:48,128 ERROR nova.compute.manager [-] Instance '32' failed network setup.
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 415, in _logging_error
(nova.compute.manager): TRACE: yield
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 446, in _run_instance
(nova.compute.manager): TRACE: network_info = _make_network_info()
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 382, in _make_network_info
(nova.compute.manager): TRACE: requested_networks=requested_networks)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 119, in allocate_for_instance
(nova.compute.manager): TRACE: 'args': args})
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/__init__.py", line 45, in call
(nova.compute.manager): TRACE: return get_impl().call(context, topic, msg)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 738, in call
(nova.compute.manager): TRACE: rv = list(rv)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 703, in __iter__
(nova.compute.manager): TRACE: raise result
(nova.compute.manager): TRACE: RemoteError: Remote error: NoMoreFixedIps Zero fixed ips available.
(nova.compute.manager): TRACE: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data\n rval = node_func(context=ctxt, **node_args)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 223, in allocate_for_instance\n super(FloatingIP, self).allocate_for_instance(context, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 631, in allocate_for_instance\n requested_networks=requested_networks)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 172, in _allocate_fixed_ips\n vpn=vpn, address=address)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 802, in allocate_fixed_ip\n instance_id)\n', u' File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 366, in fixed_ip_associate_pool\n instance_id, host)\n', u' File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 102, in wrapper\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 757, in fixed_ip_associate_pool\n raise exception.NoMoreFixedIps()\n', u'NoMoreFixedIps: Zero fixed ips available.\n'].

I think that when an instance is terminated, nova should clean up its network IP information (fixed_ips, floating_ips).
Or, when booting a new instance, nova should select an IP for it properly.
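
To make the accounting problem concrete, here is a minimal standalone reproduction. It uses sqlite3 and an invented, stripped-down schema; the two helpers only mimic the shape of nova's queries (the pool query filters on instance IS NULL, the terminate path never clears it), not the real code:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE fixed_ips (
                  id INTEGER PRIMARY KEY,
                  address TEXT,
                  instance_id INTEGER,          -- never cleared on terminate
                  allocated INTEGER DEFAULT 0,
                  reserved INTEGER DEFAULT 0,
                  deleted INTEGER DEFAULT 0)""")
db.executemany("INSERT INTO fixed_ips (address) VALUES (?)",
               [("10.0.152.3",), ("10.0.152.4",)])

def associate_from_pool(instance_id):
    """Mimics fixed_ip_associate_pool(): only rows with a NULL
    instance_id are candidates."""
    row = db.execute("""SELECT id, address FROM fixed_ips
                        WHERE instance_id IS NULL
                          AND reserved = 0 AND deleted = 0
                        LIMIT 1""").fetchone()
    if row is None:
        raise Exception("NoMoreFixedIps: Zero fixed ips available.")
    db.execute("UPDATE fixed_ips SET instance_id = ?, allocated = 1 "
               "WHERE id = ?", (instance_id, row[0]))
    return row[1]

def terminate(instance_id):
    """Mimics the terminate path: allocated is cleared, instance_id is not."""
    db.execute("UPDATE fixed_ips SET allocated = 0 WHERE instance_id = ?",
               (instance_id,))

associate_from_pool(1)
terminate(1)
associate_from_pool(2)
terminate(2)
try:
    associate_from_pool(3)
except Exception as e:
    print(e)   # NoMoreFixedIps, even though no instance is running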

Thierry Carrez (ttx) wrote :

IPs are supposed to be disassociated after a timeout (600 seconds by default; see fixed_ip_disassociate_timeout), so I think that's intended behavior?

Changed in nova:
status: New → Incomplete
Jay Lee (hyangii) wrote :

Yes, I know fixed_ip_disassociate_timeout can disassociate IP addresses, but that path only runs when the host value is set in the networks table.
When we create a network, no host value is supplied, so host defaults to NULL.
For example:
# nova-manage network create test_network 10.102.0.0/24

mysql>select * from networks\G;
*************************** 2. row ***************************
         created_at: 2011-12-09 05:20:13
         updated_at: NULL
         deleted_at: NULL
            deleted: 0
                 id: 2
           injected: 0
               cidr: 10.102.0.0/24
            netmask: 255.255.255.0
             bridge: br100
            gateway: 10.102.0.1
          broadcast: 10.102.0.255
               dns1: 8.8.4.4
               vlan: NULL
 vpn_public_address: NULL
    vpn_public_port: NULL
vpn_private_address: NULL
         dhcp_start: 10.102.0.2
         project_id: NULL
               host: NULL
            cidr_v6: NULL
         gateway_v6: NULL
              label: test_network
         netmask_v6: NULL
   bridge_interface: bond1
         multi_host: 1
               dns2: NULL
               uuid: 07774f63-df94-4ddb-9274-800ff8e5ae20
           priority: NULL

Because the host value is NULL, the fixed_ip_disassociate_timeout logic does not run properly.

This is the source code of network.manager.periodic_tasks() (around line 509):

def periodic_tasks(self, context=None):
    """Tasks to be run at a periodic interval."""
    super(NetworkManager, self).periodic_tasks(context)
    print('timeout_fixed_ips: {}'.format(self.timeout_fixed_ips))  # debug print (not upstream nova)
    if self.timeout_fixed_ips:
        now = utils.utcnow()
        timeout = FLAGS.fixed_ip_disassociate_timeout
        time = now - datetime.timedelta(seconds=timeout)
        num = self.db.fixed_ip_disassociate_all_by_timeout(context,
                                                           self.host,
                                                           time)

So nova-network passes self.host and the cutoff time to self.db.fixed_ip_disassociate_all_by_timeout().

But in the database the host is NULL, so

self.host != host value in db

while db.sqlalchemy.api.fixed_ip_disassociate_all_by_timeout() can only work when

self.host == host value in db

This is the source code of db.sqlalchemy.api.fixed_ip_disassociate_all_by_timeout() (around line 801):

@require_admin_context
def fixed_ip_disassociate_all_by_timeout(_context, host, time):
    session = get_session()
    inner_q = session.query(models.Network.id).\
                      filter_by(host=host).\
                      subquery()
    result = session.query(models.FixedIp).\
                     filter(models.FixedIp.network_id.in_(inner_q)).\
                     filter(models.FixedIp.updated_at < time).\
                     filter(models.FixedIp.instance_id != None).\
                     filter_by(allocated=False).\
                     update({'instance_id': None,
                             'leased': False,
                             'updated_at': utils.utcnow()},
                            synchronize...
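
To see why a NULL host defeats this query, here is a toy sqlite3 sketch (invented minimal schema; the updated_at cutoff is omitted for brevity). In SQL, host = 'nova152' is never true for a row whose host is NULL, so the inner subquery comes back empty and the UPDATE matches nothing:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY, host TEXT)")
db.execute("""CREATE TABLE fixed_ips (id INTEGER PRIMARY KEY,
                                      network_id INTEGER,
                                      instance_id INTEGER,
                                      allocated INTEGER)""")
# A network created without a host, and a stale association left
# behind by a terminated instance.
db.execute("INSERT INTO networks VALUES (1, NULL)")
db.execute("INSERT INTO fixed_ips VALUES (1, 1, 42, 0)")

print(db.execute("SELECT id FROM networks WHERE host = 'nova152'")
        .fetchall())   # [] -- NULL = 'nova152' is not true in SQL

cur = db.execute("""UPDATE fixed_ips SET instance_id = NULL
                    WHERE network_id IN
                          (SELECT id FROM networks WHERE host = 'nova152')
                      AND instance_id IS NOT NULL
                      AND allocated = 0""")
print(cur.rowcount)   # 0 -- the stale association is never cleaned up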


Vish Ishaya (vishvananda) wrote :

Ah, nice find. The disassociate timeout looks broken; it wasn't updated to handle multi_host networks properly. As a workaround, you can use --force_dhcp_release to make nova release the DHCP lease immediately when the VM is deleted.


Changed in nova:
status: Incomplete → Triaged
importance: Undecided → High
Jay Lee (hyangii) wrote :

Vish, I understand your comment, but the problem isn't fixed yet.
Irrespective of force_dhcp_release, terminating an instance never touches the instance value in the fixed_ips table.

deallocate_fixed_ip() (around line 815):

def deallocate_fixed_ip(self, context, address, **kwargs):
    """Returns a fixed ip to the pool."""
    self.db.fixed_ip_update(context, address,
                            {'allocated': False,
                             'virtual_interface_id': None})

(it only updates 'allocated' and 'virtual_interface_id')

But when an instance is created, it gets a fixed IP from the pool (allocate_fixed_ip(), around line 787):
def allocate_fixed_ip(self, context, instance_id, network, **kwargs):
    """Gets a fixed ip from the pool."""
    # TODO(vish): when this is called by compute, we can associate compute
    #             with a network, or a cluster of computes with a network
    #             and use that network here with a method like
    #             network_get_by_compute_host
    address = None
    if network['cidr']:
        address = kwargs.get('address', None)
        if address:
            address = self.db.fixed_ip_associate(context,
                                                 address, instance_id,
                                                 network['id'])
        else:
            address = self.db.fixed_ip_associate_pool(context.elevated(),
                                                      network['id'],
                                                      instance_id)

db.fixed_ip_associate_pool() (around line 741) is:
def fixed_ip_associate_pool(context, network_id, instance_id=None, host=None):
    session = get_session()
    with session.begin():
        network_or_none = or_(models.FixedIp.network_id == network_id,
                              models.FixedIp.network_id == None)
        fixed_ip_ref = session.query(models.FixedIp).\
                               filter(network_or_none).\
                               filter_by(reserved=False).\
                               filter_by(deleted=False).\
                               filter_by(instance=None).\
                               filter_by(host=None).\
                               with_lockmode('update').\
                               first()

This method picks a fixed IP whose instance is None. Since terminating an instance never resets that value to NULL, a terminated instance's IP address can never be reused.

Vish Ishaya (vishvananda) wrote :

When you use --force_dhcp_release, deleting the VM results in a callback from dnsmasq that calls release_fixed_ip(), and release_fixed_ip() calls self.db.fixed_ip_disassociate(context, address).

Note that this is necessary because we can't reuse an IP until dnsmasq is done with it.
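
To make the effect of that disassociate step concrete, a toy sketch (sqlite3, invented schema, not nova code): once instance_id is cleared, the row satisfies the pool query's "instance IS NULL" filter again:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE fixed_ips (address TEXT, instance_id INTEGER,
                                      allocated INTEGER, leased INTEGER)""")
# A stale row left behind by a terminated instance.
db.execute("INSERT INTO fixed_ips VALUES ('10.0.152.3', 42, 0, 1)")

def release_fixed_ip(address):
    """Toy version of the dnsmasq release callback: the handler ends by
    disassociating the address (clearing instance_id and leased)."""
    db.execute("""UPDATE fixed_ips SET instance_id = NULL, leased = 0
                  WHERE address = ?""", (address,))

release_fixed_ip('10.0.152.3')
print(db.execute("""SELECT address FROM fixed_ips
                    WHERE instance_id IS NULL""").fetchall())
# [('10.0.152.3',)] -- the address is reusable again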

Vish


Changed in nova:
assignee: nobody → Vish Ishaya (vishvananda)

Reviewed: https://review.openstack.org/2437
Committed: http://github.com/openstack/nova/commit/fb6850a5b9d7d61a4bb96af8d1cd516fd139c1f8
Submitter: Jenkins
Branch: master

commit fb6850a5b9d7d61a4bb96af8d1cd516fd139c1f8
Author: Vishvananda Ishaya <email address hidden>
Date: Fri Dec 16 16:02:54 2011 -0800

    Makes disassociate by timeout work with multi-host

     * fixes bug 898485
     * updates timeout query to key off of instance host as well
     * removes unused status=1 in related query

    Change-Id: Ia11b3d4d3db930eb31ccc2f74635971ba6edc32c
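
Reading the commit message, the fix apparently makes the timeout query key off the instance's host when the network is multi_host (where the network's host column is NULL). A rough illustrative sketch of that query shape, using sqlite3 and an invented schema (the updated_at cutoff is omitted; this is not the literal committed code):

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE networks  (id INTEGER PRIMARY KEY, host TEXT,
                            multi_host INTEGER);
    CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT);
    CREATE TABLE fixed_ips (id INTEGER PRIMARY KEY, network_id INTEGER,
                            instance_id INTEGER, allocated INTEGER);
    INSERT INTO networks  VALUES (1, NULL, 1);  -- multi_host: host is NULL
    INSERT INTO instances VALUES (42, 'nova152');
    INSERT INTO fixed_ips VALUES (1, 1, 42, 0); -- stale association
""")

cur = db.execute("""
    UPDATE fixed_ips SET instance_id = NULL
    WHERE allocated = 0
      AND instance_id IN (SELECT id FROM instances WHERE host = ?)
      AND network_id  IN (SELECT id FROM networks  WHERE multi_host = 1)
""", ("nova152",))
print(cur.rowcount)  # 1 -- the stale IP is disassociated this time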

Changed in nova:
status: Triaged → Fix Committed
Mark McLoughlin (markmc) wrote :

Is this applicable to Diablo? Anyone care to backport it, if so?

Mark McLoughlin (markmc) wrote :

Looks like we'd also need the fix for bug #907898 to be backported too

Changed in nova:
status: Fix Committed → Fix Released
Thierry Carrez (ttx) wrote :

Please keep FixCommitted until this is in milestone-proposed.

Changed in nova:
status: Fix Released → Fix Committed
Thierry Carrez (ttx) on 2012-01-25
Changed in nova:
milestone: none → essex-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx) on 2012-04-05
Changed in nova:
milestone: essex-3 → 2012.1