instances rescheduled after building network info do not update the MAC

Bug #1342919 reported by Robert Collins
This bug affects 1 person
Affects                   Status        Importance  Assigned to    Milestone
Ironic                    Fix Released  High        Chris Behrens
OpenStack Compute (nova)  Fix Released  High        Chris Behrens

Bug Description

This is weird - Ironic has used the MAC from a different node (which quite naturally leads to failures to boot!)

nova list | grep spawn
| 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | ci-overcloud-NovaCompute3-zmkjp5aa6vgf | BUILD | spawning | NOSTATE | ctlplane=10.10.16.137 |

 nova show 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | grep hyperv
 | OS-EXT-SRV-ATTR:hypervisor_hostname | b07295ee-1c09-484c-9447-10b9efee340c |

 neutron port-list | grep 137
 | 272f2413-0309-4e8b-9a6d-9cb6fdbe978d | | 78:e7:d1:23:90:0d | {"subnet_id": "a6ddb35e-305e-40f1-9450-7befc8e1af47", "ip_address": "10.10.16.137"} |

ironic node-show b07295ee-1c09-484c-9447-10b9efee340c | grep wait
 | provision_state | wait call-back |

ironic port-list | grep 78:e7:d1:23:90:0d # from neutron
| 33ab97c0-3de9-458a-afb7-8252a981b37a | 78:e7:d1:23:90:0d |

ironic port-show 33ab97c0-3de9-458a-afb7-8252a981
+------------+-----------------------------------------------------------+
| Property | Value |
+------------+-----------------------------------------------------------+
| node_uuid | 69dc8c40-dd79-4ed6-83a9-374dcb18c39b | # Ruh-roh, wrong node!
| uuid | 33ab97c0-3de9-458a-afb7-8252a981b37a |
| extra | {u'vif_port_id': u'aad5ee6b-52a3-4f8b-8029-7b8f40e7b54e'} |
| created_at | 2014-07-08T23:09:16+00:00 |
| updated_at | 2014-07-16T01:23:23+00:00 |
| address | 78:e7:d1:23:90:0d |
+------------+-----------------------------------------------------------+

ironic port-list | grep 78:e7:d1:23:9b:1d # This is the MAC my hardware list says the node should have
| caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |
# ironic port-show caba5b36-f518-43f2-84ed-0bc516cc
+------------+-----------------------------------------------------------+
| Property | Value |
+------------+-----------------------------------------------------------+
| node_uuid | b07295ee-1c09-484c-9447-10b9efee340c | # and tada right node
| uuid | caba5b36-f518-43f2-84ed-0bc516cc89df |
| extra | {u'vif_port_id': u'272f2413-0309-4e8b-9a6d-9cb6fdbe978d'} |
| created_at | 2014-07-08T23:08:26+00:00 |
| updated_at | 2014-07-16T19:07:56+00:00 |
| address | 78:e7:d1:23:9b:1d |
+------------+-----------------------------------------------------------+

Tags: ironic
Revision history for this message
Robert Collins (lifeless) wrote :

ironic node-port-list b07295ee-1c09-484c-9447-10b
+--------------------------------------+-------------------+
| uuid | address |
+--------------------------------------+-------------------+
| caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |
+--------------------------------------+-------------------+

Revision history for this message
Robert Collins (lifeless) wrote :

From the nova log - note that this is the wrong port 33ab97c0-3de9-458a-afb7-8252a981 vs caba5b36-f518-43f2-84ed-0bc516cc
2014-07-16 18:59:25.412 8189 DEBUG ironicclient.common.http [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb ]
HTTP/1.0 200 OK
date: Wed, 16 Jul 2014 18:59:25 GMT
content-length: 301
content-type: application/json; charset=UTF-8
server: WSGIServer/0.1 Python/2.7.6

{"ports": [{"uuid": "33ab97c0-3de9-458a-afb7-8252a981b37a", "links": [{"href": "http://138.35.77.4:6385/v1/ports/33ab97c0-3de9-458a-afb7-8252a981b37a", "rel": "self"}, {"href": "http://138.35.77.4:6385/ports/33ab97c0-3de9-458a-afb7-8252a981b37a", "rel": "bookmark"}], "address": "78:e7:d1:23:90:0d"}]}

Revision history for this message
Robert Collins (lifeless) wrote :

This is the request that got the wrong port:

2014-07-16 18:59:24.419 8189 DEBUG ironicclient.common.http [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb ] curl -i -X GET -H 'X-Auth-Token: _-..._v-MNNg==' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: python-ironicclient' http://138.35.77.4:6385//v1/nodes/69dc8c40-dd79-4ed6-83a9-374dcb18c39b/ports log_curl_request /opt/stack/venvs/nova/local/lib/python2.7/site-packages/ironicclient/common/http.py:97

Note that the node is wrong: 69dc8c40-dd79-4ed6-83a9-374dcb18c39b is indeed the node with port 33ab97c0-3de9-458a-afb7-8252a981b37a but not the node that is waiting for callback.
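
For context: nova's Ironic driver computes the instance's allowed MACs by listing the ports of the node it believes the instance is on, which is the query captured above. A rough sketch, with names taken from the traceback that appears later in this thread (macs_for_instance, node.list_ports); the node lookup is an assumption, not the verbatim driver source:

    def macs_for_instance(self, instance):
        # The MACs returned here are written into the neutron port when the
        # network info is first built. If the build is later rescheduled to
        # a different node, the neutron port keeps this (now wrong) MAC.
        node = self.ironicclient.call("node.get", instance['node'])  # assumed
        ports = self.ironicclient.call("node.list_ports", node.uuid)
        return set(p.address for p in ports)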

Revision history for this message
Robert Collins (lifeless) wrote :

Request before that was for http://138.35.77.4:6385//v1/nodes/69dc8c40-dd79-4ed6-83a9-374dcb18c39b - so consistent - the API is looking OK at the moment.

Revision history for this message
Robert Collins (lifeless) wrote :

Interestingly - 2014-07-16 18:59:09.920 8189 DEBUG nova.compute.manager [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Build of instance 6c364f0f-d4a0-44eb-ae37-e012bbdd368c was re-scheduled: Insufficient compute resources: Free memory 0.00 MB < requested 98304 MB. do_build_and_run_instance /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:1896

Could be coincidence.

Revision history for this message
Robert Collins (lifeless) wrote :

Later on:
2014-07-16 19:07:49.938 8189 DEBUG ironicclient.common.http [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb ] curl -i -X GET -H 'X-Auth-Token: PKIZ_eJytWFtzqkoTfZ9f8b2ndh0Y1BLQ58' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: python-ironicclient' http://138.35.77.4:6385//v1/nodes/b07295ee-1c09-484c-9447-10b9efee340c/ports log_curl_request /opt/stack/venvs/nova/local/lib/python2.7/site-packages/ironicclient/common/http.py:97

that's the expected node from when I captured the issue, and
2014-07-16 19:07:51.050 8189 DEBUG ironicclient.common.http [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb ]
HTTP/1.0 200 OK
date: Wed, 16 Jul 2014 19:07:51 GMT
content-length: 301
content-type: application/json; charset=UTF-8
server: WSGIServer/0.1 Python/2.7.6

{"ports": [{"uuid": "caba5b36-f518-43f2-84ed-0bc516cc89df", "links": [{"href": "http://138.35.77.4:6385/v1/ports/caba5b36-f518-43f2-84ed-0bc516cc89df", "rel": "self"}, {"href": "http://138.35.77.4:6385/ports/caba5b36-f518-43f2-84ed-0bc516cc89df", "rel": "bookmark"}], "address": "78:e7:d1:23:9b:1d"}]}

is correctly returned.

So it's looking like the neutron port somehow has the MAC address from the prior node.

Revision history for this message
Robert Collins (lifeless) wrote :

claims timeline:

18:59:09.918 failed
18:59:17.549 Claim successful
19:06:58.036 Aborting claim (releasing it)
19:07:34.752 Claim successful
19:39:53.854 Aborting claim (releasing it)

grep -n claims.*req-2a5adbcb-86ec-4ce0-a023-0618306b85e nova-compute.log /dev/null 2>&1| tee /tmp/vpHznXX/1
nova-compute.log:1796639:2014-07-16 18:59:09.918 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Attempting claim: memory 98304 MB, disk 1600 GB, VCPUs 24
nova-compute.log:1796640:2014-07-16 18:59:09.918 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Total memory: 98304 MB, used: 98304.00 MB
nova-compute.log:1796641:2014-07-16 18:59:09.918 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] memory limit: 98304.00 MB, free: 0.00 MB
nova-compute.log:1796642:2014-07-16 18:59:09.918 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Total disk: 1600 GB, used: 1600.00 GB
nova-compute.log:1796643:2014-07-16 18:59:09.919 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] disk limit not specified, defaulting to unlimited
nova-compute.log:1796644:2014-07-16 18:59:09.919 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Total CPUs: 24 VCPUs, used: 24.00 VCPUs
nova-compute.log:1796645:2014-07-16 18:59:09.919 8189 AUDIT nova.compute.claims [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] CPUs limit not specified, defaulting to unlimited
2014-07-16 18:59:09.919 8189 DEBUG nova.openstack.common.lockutils [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] Semaphore / lock released "instance_claim" inner /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:328
2014-07-16 18:59:09.919 8189 DEBUG nova.compute.manager [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Insufficient compute resources: Free memory 0.00 MB < requested 98304 MB. _build_and_run_instance /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:1981
2014-07-16 18:59:09.920 8189 DEBUG nova.compute.utils [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Insufficient compute resources: Free memory 0.00 MB < requested 98304 MB. notify_about_instance_usage /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py:291
2014-07-16 18:59:09.920 8189 DEBUG nova.compute.manager [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] [instance: 6c364f0f-d4a0-44eb-ae37-e012bbdd368c] Build of instance 6c364f0f-d4a0-44eb-ae37-e012bbdd368c was re-scheduled: Insufficient compute resources: Free memory 0.00 MB < requested 98304 MB. do_build_and_run_instance /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:1896

Revision history for this message
Robert Collins (lifeless) wrote :

The 7m error:
19:06:56.721 GET http://138.35.77.4:6385//v1/nodes/?instance_uuid=6c364f0f-d4a0-44eb-ae37-e012bbdd368c
19:06:58.033 {"nodes": []}
2014-07-16 19:06:58.034 8189 WARNING ironic.nova.virt.ironic.driver [req-2a5adbcb-86ec-4ce0-a023-0618306b85eb None] Destroy called on non-existing instance 6c364f0f-d4a0-44eb-ae37-e012bbdd368c.

Revision history for this message
Robert Collins (lifeless) wrote :

So we now have a timeline:
The instance raced with the scheduler once and failed.
It then landed, claimed successfully, and built on node 69dc8c40-dd79-4ed6-83a9-374dcb18c39b.
After 7m of polling, the instance_uuid went AWOL in Ironic.
The instance then landed and claimed successfully on node b07295ee-1c09-484c-9447-10b9efee340c, but its network info still carried the prior node's MAC.

I think this is enough information to mark this high or perhaps even critical.

Changed in ironic:
status: New → Triaged
importance: Undecided → High
summary: - ironic registered wrong MAC with neutron
+ instances rescheduled after building network info do not update the MAC
Revision history for this message
Robert Collins (lifeless) wrote :

So, there are two branches here. One appears to be a race with something that unsets instance_uuid (it wasn't an admin).
The other is a bug where *any* failure after the claim and network-info generation leaves the rescheduled instance with the wrong MAC.

This is problematic for two reasons:
 a) it doesn't work
 b) another instance scheduled onto that node will be unable to create its ports in neutron because the MAC is already in use, leading to cascading failures.

I think we need to make sure we remove the network allocation on reschedule, which means this may be a Nova bug.

Revision history for this message
Robert Collins (lifeless) wrote :

Looking at the nova code, it only throws the network allocation away if the error is
        except (exception.InstanceNotFound,
                exception.UnexpectedDeletingTaskStateError):

and not on the other code paths. That makes this a nova or nova-ironic-driver-interaction bug: adding a nova task.
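
To make the failure mode concrete, here is a self-contained toy of that branch structure (stub exception classes standing in for nova.exception's; an illustration of the control flow only, not nova's actual code):

    class InstanceNotFound(Exception): pass
    class UnexpectedDeletingTaskStateError(Exception): pass
    class RescheduledException(Exception): pass

    def do_build_and_run_instance(build, deallocate_networks, reschedule):
        try:
            build()
        except (InstanceNotFound, UnexpectedDeletingTaskStateError):
            deallocate_networks()   # ports (and their MACs) are released
        except RescheduledException:
            reschedule()            # bug: the old neutron port, carrying the
                                    # old node's MAC, travels with the instance

    if __name__ == "__main__":
        def failing_build():
            raise RescheduledException("insufficient compute resources")

        def release():
            print("networks deallocated")

        def retry():
            print("rescheduled with stale network info")

        do_build_and_run_instance(failing_build, release, retry)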

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/107511

Changed in nova:
assignee: nobody → Robert Collins (lifeless)
status: New → In Progress
Changed in nova:
assignee: Robert Collins (lifeless) → Chris Krelle (nobodycam)
Changed in nova:
importance: Undecided → High
tags: added: ironic
no longer affects: ironic
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/107511
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=963ad71af4750e28745b6de262da11816b403801
Submitter: Jenkins
Branch: master

commit 963ad71af4750e28745b6de262da11816b403801
Author: Robert Collins <email address hidden>
Date: Thu Jul 17 09:44:00 2014 +1200

    Deallocate the network if rescheduling for Ironic

    Ironic (and any other driver that limits the allowed MAC addresses
    on an instance) will fail if the rescheduled instance runs on a node
    that has a different constraint for MAC addresses. To deal with this
    we deallocate the network (rather than e.g. trying to lazy update it)
    because Nova can't know what the implications of leaving the network
    allocated are *when MAC limits are in place* - and we expect them to
    be in place when external systems are constrained, so being
    conservative is a good idea.

    However the code to trigger this looked at dhcp options (which drivers
    can safely update pre-boot) not MAC addresses (which can cause conflicts
    in Neutron if left in-place).

    Change-Id: I59e748db7a943d5a36d75ef63b2a1ec458c58937
    Closes-Bug: #1342919
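
A minimal sketch of the condition this change replaces, paraphrasing the commit message above (the traceback in a later comment shows the new check is self.driver.macs_for_instance(instance)); the helper name is hypothetical and this is not the verbatim nova diff:

    def should_deallocate_network_on_reschedule(driver, instance):
        # Old proxy: DHCP options. Drivers can safely update those
        # pre-boot, so their presence says nothing about whether the
        # network info is bound to specific hardware:
        #   return bool(driver.dhcp_options_for_instance(instance))
        # New check: a driver that limits allowed MACs has tied the
        # neutron port to one node's NICs, so the allocation must be
        # thrown away and redone on reschedule:
        return bool(driver.macs_for_instance(instance))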

Changed in nova:
status: In Progress → Fix Committed
Revision history for this message
John Stafford (john-stafford) wrote :

Is this issue associated with the scheduler race issue currently being experienced?

Revision history for this message
Robert Collins (lifeless) wrote : Re: [Bug 1342919] Re: instances rescheduled after building network info do not update the MAC

No, this isn't a race; it's a simple bug. The race increases how often this
is encountered.

Revision history for this message
John Stafford (john-stafford) wrote :

Okay, thank you for the clarification.

Cheers!
John Stafford

Changed in nova:
status: Fix Committed → Triaged
Revision history for this message
Robert Collins (lifeless) wrote :

Sadly this fix wasn't quite paranoid enough:
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] Traceback (most recent call last):
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 1917, in do_build_and_run_instance
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] if self.driver.macs_for_instance(instance):
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/ironic/nova/virt/ironic/driver.py", line 462, in macs_for_instance
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] ports = icli.call("node.list_ports", node.uuid)
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/ironicclient/openstack/common/apiclient/base.py", line 466, in __getattr__
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] raise AttributeError(k)
2014-07-19 23:09:37.469 6962 TRACE nova.compute.manager [instance: a371f81d-28d7-4936-b5b9-38de1b158bd0] AttributeError: uuid

I believe instance['node'] is being set to None in one of the trigger-reschedule codepaths that led to the call to macs_for_instance. The symptom that shows up is instances in state BUILD with no task - like:
http://logs.openstack.org/90/108190/1/check-tripleo/check-tripleo-overcloud-f20/59a563e/console.html

2014-07-21 23:21:25.834 | +--------------------------------------+-------------------------------------+--------+------------+-------------+--------------------+
2014-07-21 23:21:25.835 | | ID | Name | Status | Task State | Power State | Networks |
2014-07-21 23:21:25.835 | +--------------------------------------+-------------------------------------+--------+------------+-------------+--------------------+
2014-07-21 23:21:25.835 | | 5f15aefc-fdd4-4bba-83a0-0e00c484f62d | overcloud-NovaCompute0-zcia4qlu2737 | ACTIVE | - | Running | ctlplane=192.0.2.5 |
2014-07-21 23:21:25.835 | | 31ba87a6-93b7-414c-8ff6-aea50127dc49 | overcloud-NovaCompute1-ojxfdgrekzoq | BUILD | - | NOSTATE | |
2014-07-21 23:21:25.835 | | 1640d0fb-1f31-40ab-8a02-c536b8882a46 | overcloud-controller0-gvwpcte3nqxl | ACTIVE | - | Running | ctlplane=192.0.2.4 |
2014-07-21 23:21:25.874 | +--------------------------------------+-------------------------------------+--------+------------+-------------+--------------------+
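
If the instance['node']-is-None hypothesis holds, a defensive guard in the driver's MAC lookup would avoid the AttributeError (a hypothetical hardening sketch, not the fix that was actually merged, which landed on the nova side below):

    def macs_for_instance(self, instance):
        node_uuid = instance.get('node')
        if not node_uuid:
            # Instance not currently bound to an Ironic node (e.g. a
            # reschedule path cleared it): no MAC constraint computable.
            return None
        ports = self.ironicclient.call("node.list_ports", node_uuid)
        return set(p.address for p in ports)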

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/108640

Changed in nova:
assignee: Chris Krelle (nobodycam) → Robert Collins (lifeless)
status: Triaged → In Progress
Revision history for this message
Chris Behrens (cbehrens) wrote :

I'm proposing a new virt driver method for nova that allows the virt driver to say whether reschedules should deallocate networks. Once the nova side is confirmed, we'll add the method to ironic's virt driver.

Changed in nova:
assignee: Robert Collins (lifeless) → Chris Behrens (cbehrens)
Changed in ironic:
assignee: nobody → Chris Behrens (cbehrens)
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/108640
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d3854f2c05502c7407d716da7d1bd7f1ca3e0a15
Submitter: Jenkins
Branch: master

commit d3854f2c05502c7407d716da7d1bd7f1ca3e0a15
Author: Robert Collins <email address hidden>
Date: Fri Jul 18 18:45:30 2014 +1200

    Add method for deallocating networks on reschedule

    The original call to dhcp_options_for_instance() is not a great way to
    check whether deallocation of networks should happen on reschedules.

    This adds a new virt driver method 'deallocate_networks_on_reschedule'
    which can be used by a virt driver to say whether reschedules should
    deallocate networks first. This defaults to False and modifies the
    baremetal virt driver to return True. Ironic virt driver will also need
    to return True.

    Closes-Bug: 1342919
    Change-Id: I54a3252ab15e2d8b596ccc90eb4755405021f1da
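
Sketched from the commit message above (the method name and the False default are stated there; class names and the call site are approximations, not the verbatim diff):

    # Generic virt driver interface: conservative default.
    class ComputeDriver(object):
        def deallocate_networks_on_reschedule(self, instance):
            """Does the driver want networks deallocated on reschedule?"""
            return False

    # Baremetal (and later Ironic) override: their neutron ports carry
    # node-specific MACs, so stale network info must be thrown away.
    class BareMetalDriver(ComputeDriver):
        def deallocate_networks_on_reschedule(self, instance):
            return True

The compute manager's reschedule path then asks the driver instead of probing DHCP options (call-site shape approximate):

    if self.driver.deallocate_networks_on_reschedule(instance):
        self._cleanup_allocated_networks(context, instance, requested_networks)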

Changed in nova:
status: In Progress → Fix Committed
Dmitry Tantsur (divius)
Changed in ironic:
importance: Undecided → High
Thierry Carrez (ttx)
Changed in nova:
milestone: none → juno-3
status: Fix Committed → Fix Released
Revision history for this message
aeva black (tenbrae) wrote :

With the Ironic virt driver now in Nova, I believe this no longer affects Ironic, and should be re-opened for Nova.

Going to leave it tagged to Ironic, however, so we can track it for the time being.

Changed in ironic:
milestone: none → juno-rc1
Revision history for this message
Dmitry Tantsur (divius) wrote :

Looks like this bug is fixed.

Changed in ironic:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in ironic:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: juno-3 → 2014.2
Thierry Carrez (ttx)
Changed in ironic:
milestone: juno-rc1 → 2014.2