Cannot deploy a partition image to an Ironic node

Bug #1575661 reported by Pavlo Shchelokovskyy
Affects                    Status    Importance  Assigned to  Milestone
Ironic                     Invalid   Undecided   Unassigned
OpenStack Compute (nova)   Invalid   Undecided   Unassigned

Bug Description

Using a fresh master of DevStack, I cannot deploy partition images to Ironic nodes via Nova.

I have two images in Glance: a kernel image and a partition image with the kernel_id property set.
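For reference, the images were registered along these lines (names, paths, and exact flags are illustrative, not a verbatim record of my commands):

# Register the kernel image
$openstack image create --public --disk-format aki --container-format aki \
  --file ./vmlinuz ubuntu-kernel

# Register the partition image, linking it to the kernel via the kernel_id property
$openstack image create --public --disk-format qcow2 --container-format bare \
  --property kernel_id=<KERNEL_IMAGE_UUID> --file ./ubuntu.qcow2 ubuntu-partition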

I have configured the Ironic nodes and a Nova flavor with the capability "boot_option:local", as described in [0].
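The capabilities were set following the pattern from [0], roughly as follows (node and flavor names are illustrative):

# Advertise local boot on the node; note this overwrites the node's whole capabilities string
$ironic node-update node-1 add properties/capabilities='boot_option:local'

# Require the matching capability in the flavor extra specs
$nova flavor-key baremetal set capabilities:boot_option="local"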

When I try to boot a Nova instance with the partition image and the configured flavor, the instance goes to ERROR:

$openstack server list
+--------------------------------------+--------+--------+----------+
| ID | Name | Status | Networks |
+--------------------------------------+--------+--------+----------+
| 6cde85d2-47ad-446b-9a1f-960dbcca5199 | parted | ERROR | |
+--------------------------------------+--------+--------+----------+
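The boot command itself was of this general shape (image, flavor, and network identifiers are illustrative):

$openstack server create --image ubuntu-partition --flavor baremetal \
  --nic net-id=<NETWORK_UUID> parted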

The instance is assigned to an Ironic node, but the node is not moved to the deploying state:

$openstack baremetal list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| 95d3353f-61a6-44ba-8485-2881d1138ce1 | node-0 | None | power off | available | False |
| 48112a56-8f8b-42fc-b143-742cf4856e78 | node-1 | 6cde85d2-47ad-446b-9a1f-960dbcca5199 | power off | available | False |
| c66a1035-5edf-434b-9d09-39ecc9069e02 | node-2 | None | power off | available | False |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

In n-cpu.log I see the following errors:

2016-04-27 15:26:13.190 ERROR ironicclient.common.http [req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] Error contacting Ironic server: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02 (HTTP 409). Attempt 2 of 2
2016-04-27 15:26:13.190 ERROR nova.compute.manager [req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] Instance failed to spawn
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] Traceback (most recent call last):
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/nova/nova/compute/manager.py", line 2209, in _build_resources
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] yield resources
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/nova/nova/compute/manager.py", line 2055, in _build_and_run_instance
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] block_device_info=block_device_info)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/nova/nova/virt/ironic/driver.py", line 698, in spawn
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] self._add_driver_fields(node, instance, image_meta, flavor)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/nova/nova/virt/ironic/driver.py", line 366, in _add_driver_fields
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] retry_on_conflict=False)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/nova/nova/virt/ironic/client_wrapper.py", line 139, in call
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] return self._multi_getattr(client, method)(*args, **kwargs)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/python-ironicclient/ironicclient/v1/node.py", line 198, in update
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] method=http_method)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/python-ironicclient/ironicclient/common/base.py", line 171, in _update
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] resp, body = self.api.json_request(method, url, body=patch)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/python-ironicclient/ironicclient/common/http.py", line 552, in json_request
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] resp = self._http_request(url, method, **kwargs)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/python-ironicclient/ironicclient/common/http.py", line 189, in wrapper
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] return func(self, url, method, **kwargs)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] File "/opt/stack/python-ironicclient/ironicclient/common/http.py", line 534, in _http_request
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] error_json.get('debuginfo'), method, url)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] Conflict: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02 (HTTP 409)
2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]

In ir-cond.log the error is as follows:

2016-04-27 15:26:13.183 ERROR oslo_messaging.rpc.dispatcher [req-ec2f0a30-13a8-4029-ac0b-e2852c0c67c9 None None] Exception during message handling: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher incoming.message))
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher return func(*args, **kwargs)
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/ironic/ironic/conductor/manager.py", line 228, in update_node
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher node_obj.save()
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/ironic/ironic/objects/node.py", line 340, in save
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher self.dbapi.update_node(self.uuid, updates)
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher File "/opt/stack/ironic/ironic/db/sqlalchemy/api.py", line 399, in update_node
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher node=node_id)
2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher InstanceAssociated: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02

What's more, when I delete the failed server from Nova, the Ironic node is left with an orphaned instance assignment, which can only be cleared with a node-update that removes instance_uuid.
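For anyone hitting the same state, the manual cleanup is (node UUID is a placeholder):

# Clear the orphaned association by hand
$ironic node-update <NODE_UUID> remove instance_uuid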

[0] http://docs.openstack.org/developer/ironic/deploy/install-guide.html?highlight=local%20boot#enabling-local-boot-with-compute-service

Tags: ironic
Dmitry Tantsur (divius) wrote:

Hi!

Please try to find a specific error in the Ironic logs.
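Assuming a typical DevStack log layout, something along these lines should surface it (the path is an assumption):

$grep -i error /opt/stack/logs/ir-cond.log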

Double assignment of nodes is indeed a known issue in Nova and, FWIW, it is being worked on.

Changed in ironic:
status: New → Incomplete
Sean Dague (sdague) wrote:

It is unclear whether this is actually a Nova bug, as Nova is getting an error back from Ironic. Setting to Incomplete for now.

Changed in nova:
status: New → Incomplete
Lucas Alvares Gomes (lucasagomes) wrote:

Hi,

Thanks for reporting the bug. Some points:

* Leaving orphaned nodes behind: we already have a bug about this: https://bugs.launchpad.net/nova/+bug/1477490

* What driver was being used?

* I see that you are requesting local boot; this requires the tenant image to have grub2 installed in it, which is probably not the case for CirrOS (see the build example after this list).

(also, please post the ironic logs as requested by others)
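To illustrate the grub2 point above: with diskimage-builder, a partition image that carries grub2 for local boot can be built roughly like this (the exact element set is an assumption based on the install guide, not a verified recipe):

# Build an Ubuntu partition image with grub2 included, plus matching kernel/ramdisk
$disk-image-create ubuntu baremetal grub2 -o ubuntu-partition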

Pavlo Shchelokovskyy (pshchelo) wrote:

Lucas,

I was using the agent_ipmitool driver in Ironic with Ubuntu images.

Anyway, I can no longer reproduce this bug on the latest master, so please close it as Invalid. Feel free to reopen if it resurfaces.

Changed in ironic:
status: Incomplete → Invalid
Changed in nova:
status: Incomplete → Invalid