Volume detach is broken when using volume-update first

Bug #1625660 reported by Sylvain Bauza
Affects                   Status         Importance  Assigned to     Milestone
OpenStack Compute (nova)  Fix Released   High        Sylvain Bauza
nova (Newton)             Fix Committed  High        Matt Riedemann

Bug Description

https://bugs.launchpad.net/nova/+bug/1490236 focused on making the volume-update API idempotent. Unfortunately, the merged fix introduces a serious regression by incorrectly recording the old volume ID as the new attachment information.

https://github.com/openstack/nova/commit/be553fb15591c6fc212ef3a07c1dd1cbc43d6866

Consequently, it is now impossible for an end user to detach a volume once an operator has updated the BDM to point to a different volume.

Evidence here: http://paste.openstack.org/show/582248/

What makes this worse is that the original bug can be worked around simply by detaching and re-attaching the volume to the instance before swapping back...
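The failure mode can be modeled with a minimal sketch (plain Python, not nova code; the class and function names are hypothetical): a block-device-mapping (BDM) records the ID of the attached volume, detach looks the BDM up by that ID, and the regression wrote the old ID back after a swap.

```python
class BlockDeviceMapping:
    """Toy stand-in for a nova BDM: tracks which volume is attached."""
    def __init__(self, volume_id):
        self.volume_id = volume_id


def swap_volume_buggy(bdm, old_volume_id, new_volume_id):
    # The regression: the BDM is updated with the *old* volume ID,
    # so after the swap it no longer names the attached volume.
    bdm.volume_id = old_volume_id


def swap_volume_fixed(bdm, old_volume_id, new_volume_id):
    # Correct behavior: record the newly attached volume's ID.
    bdm.volume_id = new_volume_id


def detach(bdm, volume_id):
    # Detach only succeeds if the BDM matches the volume the user names.
    if bdm.volume_id != volume_id:
        raise LookupError('volume %s is not attached' % volume_id)
    bdm.volume_id = None
```

After the buggy swap, the BDM still names the old volume, so detaching the volume that is actually attached raises LookupError; the fixed variant records the new ID and detach succeeds.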

tags: added: volumes
Revision history for this message
Sylvain Bauza (sylvain-bauza) wrote :

I marked the bug as High because, unlike the original bug, it impacts end-user APIs (attach and delete), while volume-update is an admin-only API.

description: updated
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/373390

Changed in nova:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/373390
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ee2c0a00db9d6006fb7c3a07ee252d4ca4d73eff
Submitter: Jenkins
Branch: master

commit ee2c0a00db9d6006fb7c3a07ee252d4ca4d73eff
Author: Sylvain Bauza <email address hidden>
Date: Tue Sep 20 16:50:16 2016 +0200

    Revert "Set 'serial' to new volume ID in swap volumes"

    The commit below introduced a regression by writing the wrong value to
    the attachment field during the volume-update swap operation: the field
    now holds the previous volume ID instead of the current volume ID. This
    makes it impossible to detach the volume from the instance once it has
    been swapped (even after the first swap).

    Given that the original issue can be worked around by detaching and
    then re-attaching the volume before swapping back to the original
    volume, and because the original bug only impacts an admin API while
    this one impacts a user API, it is preferable to revert the regression
    directly and address the root problem next cycle, rather than leave the
    change in place and try to fix something that is hard to troubleshoot,
    especially given the lack of functional tests around the volume
    operations.

    This reverts commit be553fb15591c6fc212ef3a07c1dd1cbc43d6866.

    Change-Id: Ibad1afa5860d611e0e0ea0ba5e7dc98ae8f07190
    Closes-Bug: #1625660

Changed in nova:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/newton)

Fix proposed to branch: stable/newton
Review: https://review.openstack.org/374324

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/newton)

Reviewed: https://review.openstack.org/374324
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=eae8775f874bd3ad44febc4a66984f17540e8b6f
Submitter: Jenkins
Branch: stable/newton

commit eae8775f874bd3ad44febc4a66984f17540e8b6f
Author: Sylvain Bauza <email address hidden>
Date: Tue Sep 20 16:50:16 2016 +0200

    Revert "Set 'serial' to new volume ID in swap volumes"

    The commit below introduced a regression by writing the wrong value to
    the attachment field during the volume-update swap operation: the field
    now holds the previous volume ID instead of the current volume ID. This
    makes it impossible to detach the volume from the instance once it has
    been swapped (even after the first swap).

    Given that the original issue can be worked around by detaching and
    then re-attaching the volume before swapping back to the original
    volume, and because the original bug only impacts an admin API while
    this one impacts a user API, it is preferable to revert the regression
    directly and address the root problem next cycle, rather than leave the
    change in place and try to fix something that is hard to troubleshoot,
    especially given the lack of functional tests around the volume
    operations.

    This reverts commit be553fb15591c6fc212ef3a07c1dd1cbc43d6866.

    Change-Id: Ibad1afa5860d611e0e0ea0ba5e7dc98ae8f07190
    Closes-Bug: #1625660
    (cherry picked from commit ee2c0a00db9d6006fb7c3a07ee252d4ca4d73eff)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 14.0.0.0rc2

This issue was fixed in the openstack/nova 14.0.0.0rc2 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 15.0.0.0b1

This issue was fixed in the openstack/nova 15.0.0.0b1 development milestone.

Revision history for this message
Takashi Natsume (natsume-takashi) wrote :

@Sylvain,

I cannot reproduce the regression with the patch (https://review.openstack.org/#/c/257135/).
Could you provide more information (your environment, configuration)?

Revision history for this message
Hidekazu Nakamura (nakamura-h) wrote :

@Sylvain,

I cannot reproduce the regression with the patch (https://review.openstack.org/#/c/257135/) either.

Evidence is here: http://paste.openstack.org/show/596199/

The main difference I found between your results and mine is the volume ID shown by volume-attachments after volume-update: yours is not updated to the migrated volume's ID, while mine is updated properly.

Could you provide more information?
