rebuild fails with ilo* drivers
Bug #1435959 reported by Ramakrishnan G (rameshg87)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ironic | Fix Released | High | Chris Krelle | 2015.1.0
Bug Description
Instance rebuild fails with the iscsi_ilo and agent_ilo drivers. After the node is set to reboot into the deploy ramdisk, it boots from the instance image again and never boots the deploy ramdisk. Eventually the rebuild fails with a timeout while the node remains in the wait-call-back state.
Changed in ironic:
assignee: nobody → Ramakrishnan G (rameshg87)
Changed in ironic:
importance: Undecided → High
milestone: none → kilo-rc1
tags: added: ilo
Changed in ironic:
status: New → Triaged
Changed in ironic:
assignee: Ramakrishnan G (rameshg87) → Chris Krelle (nobodycam)
Changed in ironic:
status: Fix Committed → Fix Released
Changed in ironic:
milestone: kilo-rc1 → 2015.1.0
This is because the ilo_boot_iso (if it exists in instance_info) is always attached to the node while it is being powered on. On a rebuild, the ilo_boot_iso already exists in the node's instance_info (it is the boot ISO of the previously deployed image). Hence, when the deploy driver tries to power on the node after attaching the deploy ISO, the instance boot ISO gets re-attached and causes the failure.
https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ilo/power.py#L64-L66
We should attach the boot_iso only if the instance is active.
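A minimal sketch of the proposed guard, assuming hypothetical names (`Node`, `iso_to_attach`, the `'active'` state string) that stand in for the real Ironic driver code:

```python
# Sketch of the proposed fix: re-attach the instance boot ISO only when the
# node is ACTIVE. During a rebuild the node is in a deploying state, so the
# deploy ISO attached by the deploy driver must not be replaced.
# Node, iso_to_attach, and ACTIVE are illustrative assumptions, not the
# actual Ironic APIs.

ACTIVE = 'active'


class Node:
    """Stand-in for an Ironic node object."""
    def __init__(self, provision_state, instance_info):
        self.provision_state = provision_state
        self.instance_info = instance_info


def iso_to_attach(node, deploy_iso):
    """Return the virtual-media ISO to attach before powering on."""
    boot_iso = node.instance_info.get('ilo_boot_iso')
    # Only an ACTIVE node should boot its instance ISO; otherwise keep the
    # deploy ISO so the deploy ramdisk actually runs (e.g. on rebuild).
    if boot_iso and node.provision_state == ACTIVE:
        return boot_iso
    return deploy_iso
```

With this check, a rebuilding node keeps the freshly attached deploy ISO, while an active node still reboots into its instance boot ISO as before.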