Rescue does not provide access to ephemeral disk

Bug #1223396 reported by Phil Day on 2013-09-10
This bug affects 3 people
Affects: OpenStack Compute (nova)

Bug Description

Currently the rescue operation maps the "old" root disk to /dev/vdb and leaves the ephemeral disk (which was on vdb) unmapped.

There are cases where a root disk is corrupted beyond recovery but the user wants to save their data before deleting the instance - so the ephemeral disk should be mapped to /dev/vdc.
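The mapping described above can be sketched as follows. This is an illustrative Python sketch of the proposed device layout, not actual nova code; the helper name and return format are hypothetical.

```python
# Hypothetical sketch of the rescue disk mapping this bug asks for.
# Device names follow the bug description; the function is illustrative,
# not part of nova.

def rescue_disk_mapping(has_ephemeral):
    """Build the device map for a rescued instance: the rescue image
    boots as the primary disk, the original root is demoted to vdb,
    and (per the proposed fix) the ephemeral disk is re-attached."""
    mapping = {
        "/dev/vda": "rescue image (boot disk)",
        "/dev/vdb": "original root disk",
    }
    if has_ephemeral:
        # Proposed fix: expose the ephemeral disk so data can be salvaged.
        mapping["/dev/vdc"] = "original ephemeral disk"
    return mapping
```

Without the fix, the `"/dev/vdc"` entry is simply absent and the ephemeral data is unreachable from the rescue environment.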

Changed in nova:
status: New → Confirmed
importance: Undecided → Wishlist
tags: added: compute libvirt
jichenjc (jichenjc) on 2014-01-28
Changed in nova:
assignee: nobody → jichencom (jichenjc)
jichenjc (jichenjc) on 2014-02-12
Changed in nova:
assignee: jichencom (jichenjc) → nobody
Changed in nova:
assignee: nobody → David McNally (dave-mcnally)

Fix proposed to branch: master

Changed in nova:
status: Confirmed → In Progress
summary: - Rescue should provide access ephemeral disk
+ libvirt: Rescue should provide access ephemeral disk
tags: added: xenserver
summary: - libvirt: Rescue should provide access ephemeral disk
+ Rescue should provide access ephemeral disk

Changing the subject to make it clear that the bug is broader than just ephemeral disks - the swap disk and block device mappings are also missing in rescue mode. IOW, rescue mode should have an identical set of disks to non-rescue mode, but with the addition of the rescue image.
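The broader proposal in the comment above amounts to prepending the rescue image and shifting every original disk by one device letter. A minimal illustrative sketch, with hypothetical names (this is not nova's block device mapping code):

```python
# Illustrative sketch of the "identical set of disks plus rescue image"
# proposal: every disk the instance normally has stays attached, shifted
# one device letter to make room for the rescue boot disk.

def rescue_device_order(normal_disks):
    """Given the instance's normal disks in order (root first),
    return the device map for rescue mode."""
    devices = {"/dev/vda": "rescue image"}
    letters = "abcdefghij"
    for i, disk in enumerate(normal_disks):
        # Original /dev/vda becomes /dev/vdb, and so on.
        devices["/dev/vd" + letters[i + 1]] = disk
    return devices

order = rescue_device_order(["root", "ephemeral", "swap"])
```

Under this scheme the root lands on /dev/vdb, the ephemeral disk on /dev/vdc, and swap on /dev/vdd, so the rescued user sees everything the instance normally has.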

summary: - Rescue should provide access ephemeral disk
+ Rescue does not provide access to all disks
Phil Day (philip-day) on 2014-04-08
summary: - Rescue does not provide access to all disks
+ Rescue does not provide access to ephemeral disk
Phil Day (philip-day) wrote :

Changing the subject back since, as the originator of the bug, I don't agree with the change.

The context for raising it was specifically that if a VM can't be brought back into a state where it can be rebooted, making the ephemeral disk available in rescue means the user can still recover their data on the ephemeral disk. Those data recovery concerns don't extend to the swap device or any attached volumes - so I don't want to overly complicate a fix to the basic data loss issue by including them in the scope of any fix.

If there is a need to recover other disks, that should be addressed separately.

John Garbutt (johngarbutt) wrote :

Progress seems to have stopped on this; returning it to unassigned and Triaged. Please update if that is not the case.

Changed in nova:
assignee: David McNally (dave-mcnally) → nobody
status: In Progress → Triaged
Changed in nova:
assignee: nobody → Johannes Erdfelt (johannes.erdfelt)
status: Triaged → In Progress

Submitter: Jenkins
Branch: master

commit 1e2c92f3f20b2742887edde11aaf2a062566c16f
Author: Johannes Erdfelt <email address hidden>
Date: Tue Mar 18 09:16:29 2014 -0700

    xenapi: Attach original local disks during rescue

    When rescuing an instance, a new VM is created and only the original
    root disk is re-attached. Often when a user is rescuing a VM, they expect
    to be able to access all of their original disks so they can potentially
    salvage data.

    This changes the xenapi driver to attach the original local disks
    during rescue so the user can rescue all of their data.


    Implements: blueprint rescue-attach-all-disks
    Change-Id: Iba5cc85cd03d0a60f1858cf16aa31397e163df50
    Partial-bug: 1223396

Sean Dague (sdague) wrote : There is a partial libvirt fix; it was objected to by danpb because it only addressed the ephemeral disk, not access to swap and volumes. The bug explains why that's not needed. So I'd consider this triaged, and the old patch probably just needs to be revived.

Changed in nova:
status: In Progress → Triaged
assignee: Johannes Erdfelt (johannes.erdfelt) → nobody
importance: Wishlist → Medium
Sean Dague (sdague) on 2015-03-30
Changed in nova:
status: Triaged → Confirmed

This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which led to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: <RELEASE_NAME>"
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).

Changed in nova:
importance: Medium → Undecided
status: Confirmed → Expired