ironic hypervisor resources should be released when boot fails

Bug #1446449 reported by Haomeng,Wang
Affects                   | Status      | Importance | Assigned to     | Milestone
--------------------------+-------------+------------+-----------------+----------
Ironic                    | Invalid     | Undecided  | vikas choudhary |
OpenStack Compute (nova)  | In Progress | Low        | Unassigned      |

Bug Description

nova boot failed during the spawn step, yet the ironic hypervisor still shows the memory/CPU/disk resources as occupied by the nova instance, which is in error status. For such a failed boot, I understand the ironic hypervisor resources should be released once the boot completes with an error.
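The expected behavior can be sketched as follows. This is an illustrative toy model, not the actual nova resource-tracker code; all class and function names here are hypothetical. The point is that a resource claim taken before spawn should be rolled back when spawn raises, so the hypervisor stats return to free:

```python
# Hypothetical sketch of the behavior this bug asks for (illustrative
# names, NOT the real nova code): release the resource claim when
# spawn fails, instead of leaking it as shown in hypervisor-stats.

class FakeResourceTracker:
    def __init__(self, memory_mb, local_gb):
        self.free_ram_mb = memory_mb
        self.free_disk_gb = local_gb

    def claim(self, memory_mb, disk_gb):
        self.free_ram_mb -= memory_mb
        self.free_disk_gb -= disk_gb

    def release(self, memory_mb, disk_gb):
        self.free_ram_mb += memory_mb
        self.free_disk_gb += disk_gb


def boot(tracker, memory_mb, disk_gb, spawn):
    tracker.claim(memory_mb, disk_gb)
    try:
        spawn()
    except Exception:
        # This is the step the bug says is missing: without it the
        # claim leaks and the node stays "used" after a failed boot.
        tracker.release(memory_mb, disk_gb)
        raise


def failing_spawn():
    # Mimics the PortNotFree failure seen in the log below.
    raise RuntimeError("PortNotFree")


tracker = FakeResourceTracker(memory_mb=2048, local_gb=30)
try:
    boot(tracker, 2048, 30, failing_spawn)
except RuntimeError:
    pass
print(tracker.free_ram_mb, tracker.free_disk_gb)  # 2048 30
```

With the release step removed, the tracker would report 0 free MB and 0 free GB after the failure, matching the `nova hypervisor-stats` output captured below.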

[root@hbcontrol ~]# nova hypervisor-stats
+----------------------+-------+
| Property | Value |
+----------------------+-------+
| count | 1 |
| current_workload | 0 |
| disk_available_least | 30 |
| free_disk_gb | 0 |
| free_ram_mb | 0 |
| local_gb | 30 |
| local_gb_used | 30 |
| memory_mb | 2048 |
| memory_mb_used | 2048 |
| running_vms | 1 |
| vcpus | 2 |
| vcpus_used | 2 |
+----------------------+-------+
[root@hbcontrol ~]#

[root@hbcontrol ~]# ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| ccdce9d8-2f6a-4d7f-8c53-f89f289fd0a1 | None | None | power on | available | False |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
[root@hbcontrol ~]#

nova compute log
====================
2015-04-21 07:31:51.979 1337 INFO nova.compute.manager [req-9abbf97b-8bf0-495a-850e-7874dbb87a22 2b0fbc7394cf4459867f2957e268e2d2 db9f9ab6aef84239a0206c2bb810b55a - - -] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Starting instance...
2015-04-21 07:31:52.923 1337 INFO nova.compute.claims [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Attempting claim: memory 2048 MB, disk 30 GB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Total memory: 2048 MB, used: 0.00 MB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] memory limit: 2048.00 MB, free: 2048.00 MB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Total disk: 30 GB, used: 0.00 GB
2015-04-21 07:31:52.928 1337 INFO nova.compute.claims [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] disk limit not specified, defaulting to unlimited
2015-04-21 07:31:53.002 1337 INFO nova.compute.claims [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Claim successful

2015-04-21 07:32:06.331 1337 ERROR nova.servicegroup.drivers.db [req-a53a836a-18b5-4da4-9e1a-3ad4a95de788 - - - - -] model server went away
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db Traceback (most recent call last):
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 112, in _report_state
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db service.service_ref, state_catalog)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 164, in service_update
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db return self._manager.service_update(context, service, values)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 284, in service_update
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db service=service_p, values=values)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in call
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db retry=self.retry)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db timeout=timeout, retry=retry)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 349, in send
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db retry=retry)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 338, in _send
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db result = self._waiter.wait(msg_id, timeout)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 242, in wait
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db message = self.waiters.get(msg_id, timeout=timeout)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 148, in get
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db 'to message ID %s' % msg_id)
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db MessagingTimeout: Timed out waiting for a reply to message ID 2bda87fd61fd445ca3f39081977efda9
2015-04-21 07:32:06.331 1337 TRACE nova.servicegroup.drivers.db
2015-04-21 07:32:07.450 1337 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 2bda87fd61fd445ca3f39081977efda9
2015-04-21 07:32:07.451 1337 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 2bda87fd61fd445ca3f39081977efda9
2015-04-21 07:32:07.453 1337 ERROR nova.servicegroup.drivers.db [req-a53a836a-18b5-4da4-9e1a-3ad4a95de788 - - - - -] Recovered model server connection!

2015-04-21 07:32:10.240 1337 INFO nova.scheduler.client.report [-] Compute_service record updated for ('hbcontrol', u'ccdce9d8-2f6a-4d7f-8c53-f89f289fd0a1')
2015-04-21 07:32:13.702 1337 INFO nova.scheduler.client.report [-] Compute_service record updated for ('hbcontrol', u'ccdce9d8-2f6a-4d7f-8c53-f89f289fd0a1')
2015-04-21 07:32:15.364 1337 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager Traceback (most recent call last):
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1736, in _allocate_network_async
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager dhcp_options=dhcp_options)
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 576, in allocate_for_instance
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager self._delete_ports(neutron, instance, created_port_ids)
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager six.reraise(self.type_, self.value, self.tb)
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 568, in allocate_for_instance
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager security_group_ids, available_macs, dhcp_opts)
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 298, in _create_port
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager instance=instance.uuid)
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager PortNotFree: No free port available for instance 5597c25c-287e-420f-89d3-4f5a211471b8.
2015-04-21 07:32:15.364 1337 TRACE nova.compute.manager
2015-04-21 07:32:17.390 1337 ERROR nova.virt.ironic.driver [-] Error preparing deploy for instance 5597c25c-287e-420f-89d3-4f5a211471b8 on baremetal node ccdce9d8-2f6a-4d7f-8c53-f89f289fd0a1.
2015-04-21 07:32:17.573 1337 ERROR nova.compute.manager [-] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Instance failed to spawn
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Traceback (most recent call last):
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2458, in _build_resources
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] yield resources
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2322, in _build_and_run_instance
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] block_device_info=block_device_info)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 695, in spawn
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] flavor=flavor)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] six.reraise(self.type_, self.value, self.tb)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 686, in spawn
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] self._plug_vifs(node, instance, network_info)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 954, in _plug_vifs
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] network_info_str = str(network_info)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 480, in __str__
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] return self._sync_wrapper(fn, *args, **kwargs)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 463, in _sync_wrapper
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] self.wait()
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 495, in wait
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] self[:] = self._gt.wait()
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] return self._exit_event.wait()
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] current.throw(*self._exc)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] result = function(*args, **kwargs)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1736, in _allocate_network_async
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] dhcp_options=dhcp_options)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 576, in allocate_for_instance
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] self._delete_ports(neutron, instance, created_port_ids)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] six.reraise(self.type_, self.value, self.tb)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 568, in allocate_for_instance
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] security_group_ids, available_macs, dhcp_opts)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 298, in _create_port
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] instance=instance.uuid)
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] PortNotFree: No free port available for instance 5597c25c-287e-420f-89d3-4f5a211471b8.
2015-04-21 07:32:17.573 1337 TRACE nova.compute.manager [instance: 5597c25c-287e-420f-89d3-4f5a211471b8]
2015-04-21 07:32:17.594 1337 INFO nova.compute.manager [req-9abbf97b-8bf0-495a-850e-7874dbb87a22 2b0fbc7394cf4459867f2957e268e2d2 db9f9ab6aef84239a0206c2bb810b55a - - -] [instance: 5597c25c-287e-420f-89d3-4f5a211471b8] Terminating instance
2015-04-21 07:32:17.659 1337 WARNING nova.virt.ironic.driver [-] Destroy called on non-existing instance 5597c25c-287e-420f-89d3-4f5a211471b8.
2015-04-21 07:32:18.311 1337 ERROR nova.network.neutronv2.api [-] Unable to clear device ID for port 'None'
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api Traceback (most recent call last):
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 365, in _unbind_ports
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api port_client.update_port(port_id, port_req_body)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 99, in with_params
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api ret = self.function(instance, *args, **kwargs)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 512, in update_port
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api return self.put(self.port_path % (port), body=body)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 299, in put
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api headers=headers, params=params)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 267, in retry_request
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api headers=headers, params=params)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 208, in do_request
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api self._handle_fault_response(status_code, replybody)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 182, in _handle_fault_response
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api exception_handler_v20(status_code, des_error_body)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 80, in exception_handler_v20
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api message=message)
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api NeutronClientException: 404 Not Found
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api The resource could not be found.
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.311 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.485 1337 INFO nova.scheduler.client.report [-] Compute_service record updated for ('hbcontrol', u'ccdce9d8-2f6a-4d7f-8c53-f89f289fd0a1')
2015-04-21 07:32:18.580 1337 ERROR nova.network.neutronv2.api [-] Unable to clear device ID for port 'None'
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api Traceback (most recent call last):
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 365, in _unbind_ports
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api port_client.update_port(port_id, port_req_body)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 99, in with_params
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api ret = self.function(instance, *args, **kwargs)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 512, in update_port
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api return self.put(self.port_path % (port), body=body)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 299, in put
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api headers=headers, params=params)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 267, in retry_request
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api headers=headers, params=params)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 208, in do_request
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api self._handle_fault_response(status_code, replybody)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 182, in _handle_fault_response
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api exception_handler_v20(status_code, des_error_body)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 80, in exception_handler_v20
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api message=message)
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api NeutronClientException: 404 Not Found
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api The resource could not be found.
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api
2015-04-21 07:32:18.580 1337 TRACE nova.network.neutronv2.api
^C
[root@hbcontrol ~]#

Revision history for this message
Michael Still (mikal) wrote :

The fix seems pretty clear here -- clean up the resource on a failed spawn.

tags: added: ironic
Changed in nova:
status: New → Triaged
importance: Undecided → Low
Changed in nova:
assignee: nobody → vikas choudhary (choudharyvikas16)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/208680

Changed in nova:
status: Triaged → In Progress
Changed in nova:
status: In Progress → Fix Committed
Changed in ironic:
assignee: nobody → vikas choudhary (choudharyvikas16)
status: New → Fix Committed
Changed in ironic:
status: Fix Committed → In Progress
Changed in nova:
status: Fix Committed → In Progress
Revision history for this message
Dmitry Tantsur (divius) wrote :

Hi! I think the fix for this problem belongs in the nova driver, so I'm closing the ironic part of this bug. Please let me know if something should be fixed in the ironic source code as well.

Changed in ironic:
status: In Progress → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by vikas choudhary (<email address hidden>) on branch: master
Review: https://review.openstack.org/208680
Reason: Will be resubmitted in another review request.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/212961

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by vikas choudhary (<email address hidden>) on branch: master
Review: https://review.openstack.org/212961
Reason: will be recommitting.

Revision history for this message
vikas choudhary (choudharyvikas16) wrote :

Balazs Gibizer 2:19 PM
"I cannot really comment form ironic point of view. However I think you create a race between _query_driver_power_state_and_sync (which is called from the periodic task _sync_power_states) and _run_pending_deletes (see https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L6347) which also a periodic task. The later gets deleted but not cleaned instances from the db and cleans up the instance disk while the former sets the cleaned flag without cleaning the instance disk. Therefore I think if the former runs first then the instance disk will not be deleted and we will leak resources."

Thanks Gibi. You are right; I was making a mistake. Actually, the libvirt driver and the ironic driver do not behave exactly the same. In case of a deploy failure, the ironic driver does clean up the hypervisor, but that is not the case with the libvirt driver. https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2490 https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L796
The fix should be related to the ironic driver only. I will be re-submitting the changes soon.
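The driver difference described above can be sketched like this. The classes and helpers here are hypothetical stand-ins, not the real nova virt driver API: an ironic-style spawn releases the node itself when deploy fails, while a libvirt-style spawn lets the exception propagate and leaves cleanup to the compute manager's later destroy() call:

```python
# Illustrative contrast of the two failure paths (hypothetical
# helpers, NOT the actual nova virt driver interface).

class Node:
    def __init__(self):
        self.instance_uuid = None


class IronicLikeDriver:
    """Cleans up the baremetal node itself when deploy fails."""
    def spawn(self, node, instance_uuid, deploy):
        node.instance_uuid = instance_uuid
        try:
            deploy()
        except Exception:
            node.instance_uuid = None  # release the node on failure
            raise


class LibvirtLikeDriver:
    """Leaves failure cleanup to the compute manager's destroy()."""
    def spawn(self, node, instance_uuid, deploy):
        node.instance_uuid = instance_uuid
        deploy()  # exception propagates; node stays assigned


def failing_deploy():
    raise RuntimeError("deploy timed out")


node = Node()
try:
    IronicLikeDriver().spawn(node, "5597c25c", failing_deploy)
except RuntimeError:
    pass
print(node.instance_uuid)  # None: ironic-style spawn released the node
```

A libvirt-style spawn run against the same failing deploy would leave `node.instance_uuid` set, which is why the cleanup responsibilities differ between the two drivers.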

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/213692

Revision history for this message
vikas choudhary (choudharyvikas16) wrote :

@Dmitry:

Hi,

This bug is related to the ironic driver only. When a deploy fails on a callback timeout, destroy is called to free resources in the ironic driver, but not in libvirt.
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2490 https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L796

Whenever the virt driver frees resources and calls destroy, it should also update the nova DB instance info accordingly. It looks like this bug is valid in ironic.
Please correct me if I am wrong.
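The point about keeping the nova DB in sync can be sketched as follows. These objects and fields are hypothetical simplifications, not the real nova objects API: the same code path that frees the hypervisor resources should also clear the instance record's node assignment, so the resource tracker stops counting it:

```python
# Hypothetical sketch: when destroy frees the node, the instance
# record should be updated in the same step (illustrative objects,
# NOT the real nova.objects.Instance API).

class Instance:
    def __init__(self, uuid, node):
        self.uuid = uuid
        self.node = node
        self.vm_state = "building"


def destroy_and_sync(instance, free_node):
    free_node(instance.node)   # release hypervisor resources...
    instance.node = None       # ...and reflect that in the DB record
    instance.vm_state = "error"


freed = []
inst = Instance("5597c25c", "ccdce9d8")
destroy_and_sync(inst, freed.append)
print(freed, inst.node)  # ['ccdce9d8'] None
```

If the instance record kept its node assignment, the resource tracker would still attribute the node's memory/CPU/disk to the errored instance, which is the leak reported in this bug.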

Matt Riedemann (mriedem)
tags: added: baremetal
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by vikas choudhary (<email address hidden>) on branch: master
Review: https://review.openstack.org/213692
Reason: This bug will get fixed with https://bugs.launchpad.net/nova/+bug/1427944.

Changed in nova:
assignee: vikas choudhary (choudharyvikas16) → nobody