[SRU][RBD] Retyping of in-use boot volumes renders instances unusable (possible data corruption)

Bug #2019190 reported by Alexander Käb
This bug affects 6 people
Affects                    Status                     Importance  Assigned to  Milestone
Cinder                     New                        Critical    Eric Harney
  Wallaby                  New                        Critical    Unassigned
OpenStack Compute (nova)   Invalid                    Undecided   Unassigned
Ubuntu Cloud Archive       Status tracked in Caracal
  Antelope                 Fix Released               Undecided   Unassigned
  Bobcat                   Fix Released               Undecided   Unassigned
  Caracal                  Fix Released               Undecided   Unassigned
  Yoga                     Fix Released               Undecided   Unassigned
  Zed                      Fix Released               Undecided   Unassigned
cinder (Ubuntu)            Status tracked in Noble
  Jammy                    Fix Released               Undecided   Unassigned
  Lunar                    Won't Fix                  Undecided   Unassigned
  Mantic                   Fix Released               Undecided   Unassigned
  Noble                    Fix Released               Undecided   Unassigned

Bug Description

[Impact]

See the bug description for full details, but the short summary is that a patch landed in the Wallaby release that introduced a regression whereby retyping an in-use volume leaves the attached volume in an inconsistent state, with potential for data corruption. The result is that a VM does not receive updated connection_info from Cinder and keeps pointing to the old volume, even after a reboot.
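
One way to see the inconsistency is to compare the connection_info Cinder holds for the attachment with the RBD path the guest is actually using. A minimal sketch (the attachment-* commands require volume API microversion 3.27+; the instance domain name is illustrative):

    # Connection info as recorded by Cinder for the attachment:
    cinder --os-volume-api-version 3.27 attachment-list
    cinder --os-volume-api-version 3.27 attachment-show <attachment-id>

    # What the guest is actually pointing at (run on the compute host):
    virsh dumpxml instance-00000001 | grep -A2 "protocol='rbd'"

On an affected cloud the guest XML keeps referencing the old pool even though the volume has been moved.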

[Test Plan]

* Deploy OpenStack with two Cinder RBD storage backends (different pools)
* Create two volume types, one per backend (see the sketch after this list)
* Boot a VM from volume: openstack server create --wait --image jammy --flavor m1.small --key-name testkey --nic net-id=8c74f1ef-9231-46f4-a492-eccdb7943ecd testvm --boot-from-volume 10
* Retype the volume to type B: openstack volume set --type typeB --retype-policy on-demand <volume>
* Go to the compute host running the VM and check (e.g. with virsh dumpxml) that the VM is now copying data to the new location, e.g.

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <auth username='cinder-ceph'>
        <secret type='ceph' uuid='01b65a79-22a3-4672-80e7-5a47b0e5581a'/>
      </auth>
      <source protocol='rbd' name='cinder-ceph/volume-b68be47d-f526-4f98-a77b-a903bf8b6c65' index='1'>
        <host name='10.5.2.236' port='6789'/>
      </source>
      <mirror type='network' job='copy'>
        <format type='raw'/>
        <source protocol='rbd' name='cinder-ceph-alt/volume-c6b55b4c-a540-4c39-ad1f-626c964ae3e1' index='2'>
          <host name='10.5.2.236' port='6789'/>
          <auth username='cinder-ceph-alt'>
            <secret type='ceph' uuid='e089e27e-3a2f-49d6-b6d9-770f52177eb1'/>
          </auth>
        </source>
        <backingStore/>
      </mirror>
      <target dev='vda' bus='virtio'/>
      <serial>b68be47d-f526-4f98-a77b-a903bf8b6c65</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

which will eventually settle and change to:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <auth username='cinder-ceph-alt'>
        <secret type='ceph' uuid='e089e27e-3a2f-49d6-b6d9-770f52177eb1'/>
      </auth>
      <source protocol='rbd' name='cinder-ceph-alt/volume-c6b55b4c-a540-4c39-ad1f-626c964ae3e1' index='2'>
        <host name='10.5.2.236' port='6789'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>b68be47d-f526-4f98-a77b-a903bf8b6c65</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

* Lastly, a reboot of the VM should be successful.
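
The volume-type setup referenced in the steps above can be done roughly as follows (a sketch; the volume_backend_name values must match the backend sections configured in cinder.conf, and cinder-ceph/cinder-ceph-alt are just the names used in this test):

    # One volume type per RBD backend (backend names are assumptions):
    openstack volume type create typeA --property volume_backend_name=cinder-ceph
    openstack volume type create typeB --property volume_backend_name=cinder-ceph-alt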

[Regression Potential]
Given that the current state is potential data corruption, and that the patch fixes this by successfully refreshing the connection info, I do not see a regression potential. It is in fact fixing a regression.

-------------------------------------------------------------------------

While trying out the volume retype feature in cinder, we noticed that after an instance is
rebooted it will not come back online and will be stuck in an error state, or, if it does
come back online, its filesystem is corrupted.

## Observations

Say there are two volume types `fast` (stored in ceph pool `volumes`) and `slow`
(stored in ceph pool `volumes.hdd`). Before the retype, the volume in this example is
present in the `volumes.hdd` pool and has a watcher accessing it.

```sh
[ceph: root@mon0 /]# rbd ls volumes.hdd
volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9

[ceph: root@mon0 /]# rbd status volumes.hdd/volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9
Watchers:
        watcher=[2001:XX:XX:XX::10ad]:0/3914407456 client.365192 cookie=140370268803456
```

Starting the retype process using the migration policy `on-demand` for that volume, either
via the horizon dashboard or the CLI, causes the volume to be correctly transferred to the
`volumes` pool within the ceph cluster. However, the watcher does not get transferred, so
nobody is accessing the volume after it has been moved.

```sh
[ceph: root@mon0 /]# rbd ls volumes
volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9

[ceph: root@mon0 /]# rbd status volumes/volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9
Watchers: none
```

Taking a look at the libvirt XML of the instance in question, one can see that the `rbd`
volume path does not change after the retype completes. Therefore, if the instance is
restarted, nova will not be able to find its volume, preventing the instance from starting.

#### Pre retype

```xml
[...]
<source protocol='rbd' name='volumes.hdd/volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9' index='1'>
    <host name='2001:XX:XX:XXX::a088' port='6789'/>
    <host name='2001:XX:XX:XXX::3af1' port='6789'/>
    <host name='2001:XX:XX:XXX::ce6f' port='6789'/>
</source>
[...]
```

#### Post retype (no change)

```xml
[...]
<source protocol='rbd' name='volumes.hdd/volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9' index='1'>
    <host name='2001:XX:XX:XXX::a088' port='6789'/>
    <host name='2001:XX:XX:XXX::3af1' port='6789'/>
    <host name='2001:XX:XX:XXX::ce6f' port='6789'/>
</source>
[...]
```

### Possible cause

While looking through the code responsible for the volume retype, we found a function
`_swap_volume` which, by our understanding, should be responsible for fixing the association
above. As we understand it, cinder should use an internal API path to have nova perform this
action. This does not seem to happen.

(`_swap_volume`: https://github.com/openstack/nova/blob/stable/wallaby/nova/compute/manager.py#L7218)
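
For illustration: the user-facing equivalent of that internal path is nova's swap-volume
action, which cinder's generic (non-assisted) migration flow is expected to invoke. A sketch
of the CLI form (this is not a workaround for this bug):

```sh
# Swap an attached volume for a new one; nova copies the data across and
# repoints the attachment. Cinder calls this nova API internally during a
# generic volume migration.
nova volume-update <server-id> <old-volume-id> <new-volume-id>
```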

## Further observations

If one tries to regenerate the libvirt XML, e.g. by live-migrating the instance and then
rebooting it, the filesystem gets corrupted.

## Environmental Information and possibly related reports

We are running the latest version of TripleO Wallaby using the hardened (whole disk)
overcloud image for the nodes.

Cinder Volume Version: `openstack-cinder-18.2.2-0.20230219112414.f9941d2.el8.noarch`

### Possibly related

- https://bugzilla.redhat.com/show_bug.cgi?id=1293440

(might want to paste the above to a markdown file for better readability)

Revision history for this message
Sofia Enriquez (lsofia-enriquez) wrote :

Hello Alexander Käb,

To clarify:
- (double check) Are instances created from volumes, or are volumes attached to an instance? Can you share the command you are using to do this (steps).
- Is the data on the volumes encrypted?
- Have you encountered any errors in the cinder c-vol logs? Could you share the c-vol log?

Thanks!

tags: added: drivers live-migration nova rbd retype
Changed in cinder:
importance: Undecided → Medium
summary: - Retyping of in-use boot volumes renders instances unusable (possible
- data corruption)
+ [RBD] Retyping of in-use boot volumes renders instances unusable
+ (possible data corruption)
Revision history for this message
Sofia Enriquez (lsofia-enriquez) wrote : Re: [RBD] Retyping of in-use boot volumes renders instances unusable (possible data corruption)

Adding Nova because the report indicates that the volume is migrated to a different ceph pool but the instance points to the old location.

Revision history for this message
Alexander Käb (alexander-kaeb) wrote :

Hi Sofia,

all the tested instances were created from an image, with the option `Create New Volume`
checked, when creating an instance via the dashboard. The steps performed to retype the
volumes are as follows:

- Either via the dashboard or the CLI (`cinder retype --migration-policy on-demand [...]`), retype the volume from slow to fast or fast to slow
- Reboot the instance, e.g. using a soft reboot

Just these two steps are enough to bring the instance to an error state, as libvirt will
try to load the instance's volume from the pre-retype location, which will fail.
Sometimes live-migrating the instance after the retype can lead to the instance working again, but
if the instance performs some I/O operations, there is a great chance that the FS is broken
after a reboot:

```
[  OK  ] Stopped target Basic System.
[  OK  ] Reached target Initrd File Systems.
[  OK  ] Stopped target System Initialization.
[  OK  ] Stopped dracut pre-mount hook.
[  OK  ] Stopped dracut initqueue hook.
[  OK  ] Stopped dracut pre-trigger hook.
[  OK  ] Stopped dracut pre-udev hook.
[  OK  ] Stopped dracut cmdline hook.
[  OK  ] Started Emergency Shell.
[  OK  ] Reached target Emergency Mode.

Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

:/#
```

Attached you will find the cinder-volume log and the nova-compute log during an earlier
test. (debug log enabled)

Revision history for this message
Alexander Käb (alexander-kaeb) wrote :

nova log

Revision history for this message
melanie witt (melwitt) wrote :

Generally, nova gets the volume locations from cinder as a field called 'connection_info' which belongs to a volume attachment.

The way retype usually works is cinder creates a new empty volume with the destination volume type and then calls the nova swap_volume API [1] to swap the volume from the original source volume to the new destination volume. Nova will call the cinder API to create a new attachment for the destination volume. Then, nova gathers the nova-compute host connector and calls the cinder API to update the attachment with the host connector. Cinder API returns the new connection_info from this call. Nova calls down into the libvirt driver to connect the new volume and copy the volume data from the old volume to the new volume, using the new connection_info for the destination libvirt XML. Finally, Nova disconnects the old volume.
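
For reference, a rough sketch of that attachment sequence at the REST level, per the Cinder attachments API (URLs, tokens and IDs are placeholders):

    # 1) Nova creates a new attachment for the destination volume.
    curl -s -X POST "$CINDER_URL/v3/$PROJECT_ID/attachments" \
      -H "X-Auth-Token: $TOKEN" \
      -H "OpenStack-API-Version: volume 3.27" \
      -H "Content-Type: application/json" \
      -d '{"attachment": {"volume_uuid": "<dest-volume>", "instance_uuid": "<server>"}}'

    # 2) Nova updates the attachment with the compute host connector; the
    #    response carries the new connection_info used for the guest XML.
    curl -s -X PUT "$CINDER_URL/v3/$PROJECT_ID/attachments/<attachment-id>" \
      -H "X-Auth-Token: $TOKEN" \
      -H "OpenStack-API-Version: volume 3.27" \
      -H "Content-Type: application/json" \
      -d '{"attachment": {"connector": {"host": "<compute-host>", "os_type": "linux", "platform": "x86_64", "multipath": false}}}'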

However, from what I can tell reading the code, in the case of the RBD driver on the cinder side, I don't see that nova is called at all as part of the retyping process, so it doesn't know about the new volume location when it goes to generate the guest XML.

I found mention about this issue on the ceph-users mailing list recently as well:

https://<email address hidden>/thread/TJO6YBJFHCY743UPQDY4D4PENZDQFAHH

which pointed to these posts on the openstack-discuss mailing list:

https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034160.html

https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034165.html

According to the second post, the retype of attached RBD volumes was working in Victoria as long as the [nova] section of the cinder.conf was configured and then it stopped working in Wallaby. The second post noted https://bugs.launchpad.net/cinder/+bug/1886543 as the only change around retype for Wallaby, so is it possible that is related?

I think this bug is Critical given it's a regression and has potential for data loss. Please let me know if I’ve got anything wrong here and/or if anything is needed on the nova side.

[1] https://github.com/openstack/cinder/blob/5728d3899f13140203d44259ca8dfb7ae132e192/cinder/volume/manager.py#L2429

Changed in cinder:
importance: Medium → Critical
Eric Harney (eharney)
Changed in cinder:
assignee: nobody → Eric Harney (eharney)
Revision history for this message
melanie witt (melwitt) wrote :

I spent some time on this and I was able to reproduce the bug.

I am not sure exactly how RBD assisted volume migration is supposed to work but there is no call to Nova happening, so Nova doesn't know anything has changed. That point kind of doesn't matter though because AFAICT there is no existing API call that could be used to tell Nova, "point at the new volume location without copying any volume data to it". The only API we have at present is the swap volume API and there's no way to tell it not to copy volume data.

The other issue I see is that the volume attachment connection_info on the Cinder side does not itself get updated with the new volume location. So even if Nova was able to pull new connection_info from Cinder [1], it would still fail to boot because the new volume location isn't there.

Based on the fact that we don't have an API to tell Nova about the new volume location without copying data, I'm not sure what we can do to immediately fix this other than revert the patch that changed the mechanism for RBD volume retype.

For a future fix, I "think" it would not be difficult to add a "do not copy" type of flag to the PUT /servers/{server_id}/os-volume_attachments/{volume_id} API in Nova [2]. Then after the retype Cinder could call Nova to say "this volume moved but don't copy any data there".
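
For context, the existing swap-volume call looks roughly like this; the "do not copy" flag described above is hypothetical and does not exist in this request today:

    # Existing nova API: repoint the attachment at a new volume, copying
    # the data from the old volume to the new one.
    curl -s -X PUT "$NOVA_URL/servers/<server-id>/os-volume_attachments/<old-volume-id>" \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"volumeAttachment": {"volumeId": "<new-volume-id>"}}'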

Here are the steps I used to reproduce the issue:

https://paste.openstack.org/show/bNpzkjbeXrmTCwNHfDGs

No volumes are encrypted and the [nova] section is configured in cinder.conf.

[1] https://docs.openstack.org/nova/latest/cli/nova-manage.html#volume-attachment-refresh
[2] https://docs.openstack.org/api-ref/compute/?expanded=update-a-volume-attachment-detail#update-a-volume-attachment
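
A sketch of the refresh command referenced in [1] (argument names per the linked nova-manage docs; the connector JSON path is a placeholder, and the linked docs describe the operation's constraints):

    # Re-fetch connection_info from Cinder for an existing attachment and
    # rewrite the stale block-device-mapping record.
    nova-manage volume_attachment refresh <instance-uuid> <volume-id> /path/to/connector.json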

Revision history for this message
melanie witt (melwitt) wrote :

I uploaded a DNM tempest patch to run modified TestVolumeMigrateRetypeAttached tests in tempest/scenario/test_volume_migrate_attached.py with the master, stable/wallaby, and stable/victoria branches [1]:

  https://review.opendev.org/c/openstack/tempest/+/890360

The tests in ^ are modified to add a hard reboot of the instance at the end.

The migrate volume test passes in all branches while the retype volume test fails in master and stable/wallaby but passes in stable/victoria [2].

The unmodified tests will pass because they aren't hard rebooting the server to cause regeneration of guest XML.

In the test logs on the DNM patch [2], I think I might have also found why migrate works while retype fails.

The RBD driver [3] makes a decision about which path to take based on the volume status. In the test logs, it's showing that for migrate, the volume is 'in-use' and the RBD driver (correctly) considers this case to be a move across different pools and falls back to a generic migrate which calls the Nova swap volume API. For retype however, the volume status is 'retyping' so it doesn't refuse the assisted migration and it goes ahead.
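
On a devstack node the corresponding decision can be spotted in the c-vol log with something like (the systemd unit name assumes a devstack deployment):

    sudo journalctl -u devstack@c-vol.service | \
        grep -E 'Attempting RBD assisted volume migration|Issue driver.migrate_volume'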

Excerpts from the c-vol log:

migrate volume:

Aug 03 22:24:16.833416 np0034853654 cinder-volume[116332]: DEBUG cinder.volume.manager [None req-1c151856-e8fb-41e3-ad42-36810f4fcec8 tempest-TestVolumeMigrateRetypeAttached-2102186043 None] Issue driver.migrate_volume. {{(pid=116332) migrate_volume /opt/stack/cinder/cinder/volume/manager.py:2609}}
Aug 03 22:24:16.834270 np0034853654 cinder-volume[116332]: DEBUG cinder.volume.drivers.rbd [None req-1c151856-e8fb-41e3-ad42-36810f4fcec8 tempest-TestVolumeMigrateRetypeAttached-2102186043 None] Attempting RBD assisted volume migration. volume: 9a27b9cd-e6e5-4f29-a127-a030e94c5356, host: {'host': 'np0034853654@ceph2#ceph2', 'cluster_name': None, 'capabilities': {'vendor_name': 'Open Source', 'driver_version': '1.2.0', 'storage_protocol': 'ceph', 'total_capacity_gb': 24.56, 'free_capacity_gb': 24.56, 'reserved_percentage': 0, 'multiattach': True, 'thin_provisioning_support': True, 'max_over_subscription_ratio': '20.0', 'location_info': 'ceph:/etc/ceph/ceph.conf:018eb22d-04d2-464f-8294-675d033013df:cinder:othervolumes', 'backend_state': 'up', 'volume_backend_name': 'ceph2', 'replication_enabled': False, 'allocated_capacity_gb': 0, 'filter_function': None, 'goodness_function': None, 'timestamp': '2023-08-03T22:23:59.050934'}}, status=in-use. {{(pid=116332) migrate_volume /opt/stack/cinder/cinder/volume/drivers/rbd.py:1924}}
Aug 03 22:24:16.834270 np0034853654 cinder-volume[116332]: DEBUG os_brick.initiator.linuxrbd [None req-1c151856-e8fb-41e3-ad42-36810f4fcec8 tempest-TestVolumeMigrateRetypeAttached-2102186043 None] opening connection to ceph cluster (timeout=-1). {{(pid=116332) connect /opt/stack/os-brick/os_brick/initiator/linuxrbd.py:70}}
Aug 03 22:24:16.861112 np0034853654 cinder-volume[116332]: DEBUG cinder.volume.drivers.rbd [None req-1c151856-e8fb-41e3-ad42-36810f4fcec8 tempest-TestVolumeMigrateRetypeAttached-2102186043 None] connecting to cinder@ceph (conf=/etc/ceph/ceph.conf, timeout=-1). {{(pid=116332) _do_conn /opt/stack/cinder/cinder/volume/drivers/rbd.py:480}}
Au...

Revision history for this message
Luigi Toscano (ltoscano) wrote :

Can the tempest patch be resurrected and pushed as a proper patch? I didn't notice this comment (sorry) and ended up writing a simpler version, which I'm going to abandon: https://review.opendev.org/c/openstack/tempest/+/893863

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/cinder/+/896172

Revision history for this message
melanie witt (melwitt) wrote : Re: [RBD] Retyping of in-use boot volumes renders instances unusable (possible data corruption)

Thank you Luigi for pointing that out!

I have pushed a proper patch and proposed two more patches as well to enable us to configure Ceph in devstack to use a separate Ceph pool per backend:

* tempest patch to test regression: https://review.opendev.org/c/openstack/tempest/+/890360

* devstack-plugin-ceph patch to enable config of separate Ceph pools: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/895533

* cinder patch to add a cinder-tempest-ceph-multibackend job: https://review.opendev.org/c/openstack/cinder/+/896172

Revision history for this message
Yusuf Güngör (yusuf2) wrote :

Hi everyone, in our tests we have a workaround: after a volume retype, cold-migrating the instance updates the pool name in the guest XML and creates a new volume attachment which contains the new pool name in the attachment properties.
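
For anyone trying this, the cold-migration workaround is roughly (a sketch; the confirm step's syntax varies with the client version):

    openstack server migrate <server>
    # Once the server reaches VERIFY_RESIZE:
    openstack server resize confirm <server>   # older clients: openstack server resize --confirm <server>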

Revision history for this message
melanie witt (melwitt) wrote :

In an effort to clean up stale bugs, I'm marking this as Invalid for Nova because the issue is in Cinder.

Changed in nova:
status: New → Invalid
Revision history for this message
Nishant Dash (dash3) wrote :

Hello,

I am hitting this issue on a production cluster (jammy-yoga).

At the moment I have two situations, with workarounds other than cold migrating:
- Instances that have been retyped but not rebooted, so they are still active.
For these, I tried to shelve and unshelve the VM and that was enough to update the block-device-mapping entry in the nova DB for the instance. (Cold migration works too, as described in comment #11.)

- Instances that have been retyped and sent a reboot request, and are now in ERROR.
For these, setting their state to ACTIVE and then doing the shelve/unshelve works.

These are the only workarounds I have so far.
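
Roughly, as commands (a sketch; the state reset is admin-only):

    # Retyped but not yet rebooted (instance still ACTIVE):
    openstack server shelve <server>
    openstack server unshelve <server>

    # Retyped, reboot attempted, instance now in ERROR:
    openstack server set --state active <server>
    openstack server shelve <server>
    openstack server unshelve <server>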

Revision history for this message
melanie witt (melwitt) wrote :

Update:

The patch that introduced the regression has been reverted on:

* master (Caracal) https://review.opendev.org/c/openstack/cinder/+/899157

* stable/2023.2 (Bobcat) https://review.opendev.org/c/openstack/cinder/+/900671

and has a revert approved on:

* stable/2023.1 (Antelope) https://review.opendev.org/c/openstack/cinder/+/900819

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Since we are using Yoga and hitting this issue I had a go at reverting the patch there too and can confirm that it does resolve the problem.

Revision history for this message
Edward Hope-Morley (hopem) wrote :

I have proposed the Z and Y backports of the revert. Hoping we can also get those landed asap.

Revision history for this message
Edward Hope-Morley (hopem) wrote :
summary: - [RBD] Retyping of in-use boot volumes renders instances unusable
+ [SRU][RBD] Retyping of in-use boot volumes renders instances unusable
(possible data corruption)
description: updated
Revision history for this message
Edward Hope-Morley (hopem) wrote :
Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

The attachment "lp2019190-mantic.debdiff" seems to be a debdiff. The ubuntu-sponsors team has been subscribed to the bug report so that they can review and hopefully sponsor the debdiff. If the attachment isn't a patch, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are member of the ~ubuntu-sponsors, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issue please contact him.]

tags: added: patch
Revision history for this message
James Page (james-page) wrote :

Included in most recent snapshots for Caracal

Changed in cinder (Ubuntu Noble):
status: New → Fix Released
James Page (james-page)
Changed in cinder (Ubuntu Mantic):
status: New → In Progress
James Page (james-page)
Changed in cinder (Ubuntu Lunar):
status: New → Won't Fix
Revision history for this message
Simon Chopin (schopin) wrote :

I'm unsubscribing ubuntu-sponsors as this doesn't really look like your standard run-of-the-mill debdiff contribution :)

If I'm mistaken, please re-subscribe ubuntu-sponsors and add a message pointing us towards the relevant contribution.

Revision history for this message
James Page (james-page) wrote :

@schopin I've got the sponsoring for this in my TODO list.

Revision history for this message
James Page (james-page) wrote :

I've sponsored Ed's diffs across impacted Ubuntu and Cloud Archive series.

Revision history for this message
Timo Aaltonen (tjaalton) wrote : Please test proposed package

Hello Alexander, or anyone else affected,

Accepted cinder into mantic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/cinder/2:23.0.0-0ubuntu1.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-mantic to verification-done-mantic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-mantic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in cinder (Ubuntu Mantic):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-mantic
Revision history for this message
Timo Aaltonen (tjaalton) wrote :

Hello Alexander, or anyone else affected,

Accepted cinder into jammy-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/cinder/2:20.3.1-0ubuntu1.1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-jammy to verification-done-jammy. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-jammy. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in cinder (Ubuntu Jammy):
status: New → Fix Committed
tags: added: verification-needed-jammy
Revision history for this message
James Page (james-page) wrote :

Hello Alexander, or anyone else affected,

Accepted cinder into zed-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:zed-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-zed-needed to verification-zed-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-zed-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-zed-needed
Revision history for this message
James Page (james-page) wrote :

Hello Alexander, or anyone else affected,

Accepted cinder into antelope-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:antelope-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-antelope-needed to verification-antelope-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-antelope-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-antelope-needed
Revision history for this message
Edward Hope-Morley (hopem) wrote :

Verified mantic-proposed using [Test Case] with the following output:

# apt-cache policy cinder-common
cinder-common:
  Installed: 2:23.0.0-0ubuntu1.1
  Candidate: 2:23.0.0-0ubuntu1.1
  Version table:
 *** 2:23.0.0-0ubuntu1.1 100
        100 http://nova.clouds.archive.ubuntu.com/ubuntu mantic-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     2:23.0.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu mantic/main amd64 Packages

# virsh dumpxml instance-00000001| grep cinder-ceph
      <auth username='cinder-ceph-alt'>
      <source protocol='rbd' name='cinder-ceph-alt/volume-9193814c-247b-42ae-9e47-bdda4c96aca7' index='2'>

tags: added: verification-done-mantic
removed: verification-needed-mantic
Revision history for this message
James Page (james-page) wrote :

Hello Alexander, or anyone else affected,

Accepted cinder into bobcat-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:bobcat-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-bobcat-needed to verification-bobcat-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-bobcat-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-bobcat-needed
Revision history for this message
James Page (james-page) wrote :

Hello Alexander, or anyone else affected,

Accepted cinder into yoga-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:yoga-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-yoga-needed to verification-yoga-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-yoga-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-yoga-needed
Revision history for this message
Edward Hope-Morley (hopem) wrote :

Verified jammy-bobcat/proposed using [Test Case] with the following output:

# apt-cache policy cinder-common
cinder-common:
  Installed: 2:23.0.0-0ubuntu1.1~cloud0
  Candidate: 2:23.0.0-0ubuntu1.1~cloud0
  Version table:
 *** 2:23.0.0-0ubuntu1.1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-proposed/bobcat/main amd64 Packages
        100 /var/lib/dpkg/status
     2:20.3.1-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
     2:20.2.0-0ubuntu1.1 500
        500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
     2:20.0.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy/main amd64 Packages

root@juju-e47fb3-sf00376872-ps6-12:/home/ubuntu# virsh dumpxml instance-00000002| grep cinder-ceph
      <auth username='cinder-ceph-alt'>
      <source protocol='rbd' name='cinder-ceph-alt/volume-6bbd362e-3b28-4b66-b1e6-14813a9ac2aa' index='2'>

tags: added: verification-bobcat-done
removed: verification-bobcat-needed
Revision history for this message
Edward Hope-Morley (hopem) wrote :

antelope-proposed verified using [Test Case] with the following output:

# apt-cache policy cinder-common
cinder-common:
  Installed: 2:22.1.1-0ubuntu1.1~cloud0
  Candidate: 2:22.1.1-0ubuntu1.1~cloud0
  Version table:
 *** 2:22.1.1-0ubuntu1.1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-proposed/antelope/main amd64 Packages
        100 /var/lib/dpkg/status
     2:20.3.1-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
     2:20.2.0-0ubuntu1.1 500
        500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
     2:20.0.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy/main amd64 Packages

# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph/volume-d843d497-8514-4a4f-897f-eefc74902e3c' index='1'>

# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph-alt/volume-b5b806df-51ba-447c-96ac-0457789535d9' index='2'>

tags: added: verification-antelope-done
removed: verification-antelope-needed
Revision history for this message
Andreas Hasenack (ahasenack) wrote :

I can release mantic, but this is missing the jammy verification please.

Once released, should we also reopen https://bugs.launchpad.net/cinder/+bug/1886543 ?

Revision history for this message
Andreas Hasenack (ahasenack) wrote : Update Released

The verification of the Stable Release Update for cinder has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package cinder - 2:23.0.0-0ubuntu1.1

---------------
cinder (2:23.0.0-0ubuntu1.1) mantic; urgency=medium

  [ Corey Bryant ]
  * d/gbp.conf: Create stable/2023.2 branch.
  * d/gbp.conf, .launchpad.yaml: Sync from cloud-archive-tools for
    bobcat.

  [ Edward Hope-Morley ]
  * revert driver assister volume retype (LP: #2019190)
    - d/p/0001-Revert-Driver-assisted-migration-on-retype-when-it-s.patch

 -- James Page <email address hidden> Thu, 25 Jan 2024 16:33:13 +0000

Changed in cinder (Ubuntu Mantic):
status: Fix Committed → Fix Released
Revision history for this message
Edward Hope-Morley (hopem) wrote :

@ahasenack correct, I have not yet done Jammy (or Yoga UCA); I am working through them in descending order and will get to that next (have just done Zed UCA).

Revision history for this message
Edward Hope-Morley (hopem) wrote :

zed-proposed verified using [Test Case] and the following output:

# apt-cache policy cinder-common
cinder-common:
  Installed: 2:21.3.1-0ubuntu1.1~cloud0
  Candidate: 2:21.3.1-0ubuntu1.1~cloud0
  Version table:
 *** 2:21.3.1-0ubuntu1.1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu jammy-proposed/zed/main amd64 Packages
        100 /var/lib/dpkg/status
     2:20.3.1-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
     2:20.2.0-0ubuntu1.1 500
        500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
     2:20.0.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy/main amd64 Packages

# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph/volume-5e002160-d3d0-4769-8ae9-a630db072f1e' index='1'>

# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph-alt/volume-e5eed4a1-fb55-4027-833e-62a237a3410e' index='2'>

tags: added: verification-zed-done
removed: verification-zed-needed
Revision history for this message
Edward Hope-Morley (hopem) wrote :

jammy-proposed verified using [Test Case] with the following output:

# apt-cache policy cinder-common
cinder-common:
  Installed: 2:20.3.1-0ubuntu1.1
  Candidate: 2:20.3.1-0ubuntu1.1
  Version table:
 *** 2:20.3.1-0ubuntu1.1 500
        500 http://archive.ubuntu.com/ubuntu jammy-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     2:20.3.1-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
     2:20.2.0-0ubuntu1.1 500
        500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
     2:20.0.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu jammy/main amd64 Packages

root@juju-9e6d1e-sf00376872-ps6-12:/home/ubuntu# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph/volume-fcf5f6d2-7b47-4b88-a187-27d737c2b356' index='1'>

root@juju-9e6d1e-sf00376872-ps6-12:/home/ubuntu# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph-alt/volume-05ea5647-b37b-461b-8f25-9a98a7759af1' index='2'>

tags: added: verification-done-jammy
removed: verification-needed-jammy
Revision history for this message
Edward Hope-Morley (hopem) wrote :

Verified focal-yoga using [Test Case] and the following output:

# apt-cache policy cinder-common
cinder-common:
  Installed: 2:20.3.1-0ubuntu1.1~cloud0
  Candidate: 2:20.3.1-0ubuntu1.1~cloud0
  Version table:
 *** 2:20.3.1-0ubuntu1.1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-proposed/yoga/main amd64 Packages
        100 /var/lib/dpkg/status
     2:16.4.2-0ubuntu2.4 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
     2:16.0.0~b3~git2020041012.eb915e2db-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 Packages

# virsh dumpxml instance-00000001 | grep rbd
      <source protocol='rbd' name='cinder-ceph/volume-bee8036a-3f77-4439-a15c-bf2c575a48ce' index='1'>

# virsh dumpxml instance-00000001| grep rbd
      <source protocol='rbd' name='cinder-ceph-alt/volume-31156d54-6769-42ba-bb9a-11dbb2a1dfcf' index='1'>

tags: added: verification-done verification-yoga-done
removed: verification-needed verification-yoga-needed
Revision history for this message
James Page (james-page) wrote :

The verification of the Stable Release Update for cinder has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
James Page (james-page) wrote :

This bug was fixed in the package cinder - 2:23.0.0-0ubuntu1.1~cloud0
---------------

 cinder (2:23.0.0-0ubuntu1.1~cloud0) jammy-bobcat; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 cinder (2:23.0.0-0ubuntu1.1) mantic; urgency=medium
 .
   [ Corey Bryant ]
   * d/gbp.conf: Create stable/2023.2 branch.
   * d/gbp.conf, .launchpad.yaml: Sync from cloud-archive-tools for
     bobcat.
 .
   [ Edward Hope-Morley ]
   * revert driver assister volume retype (LP: #2019190)
     - d/p/0001-Revert-Driver-assisted-migration-on-retype-when-it-s.patch

Revision history for this message
James Page (james-page) wrote :

The verification of the Stable Release Update for cinder has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
James Page (james-page) wrote :

This bug was fixed in the package cinder - 2:22.1.1-0ubuntu1.1~cloud0
---------------

 cinder (2:22.1.1-0ubuntu1.1~cloud0) jammy-antelope; urgency=medium
 .
   * revert driver assister volume retype (LP: #2019190)
     - d/p/0001-Revert-Driver-assisted-migration-on-retype-when-it-s.patch

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package cinder - 2:20.3.1-0ubuntu1.1

---------------
cinder (2:20.3.1-0ubuntu1.1) jammy; urgency=medium

  * Revert driver assisted volume retype (LP: #2019190):
    - d/p/0001-Revert-Driver-assisted-migration-on-retype-when-it-s.patch

 -- Edward Hope-Morley <email address hidden> Fri, 26 Jan 2024 12:26:48 +0000

Changed in cinder (Ubuntu Jammy):
status: Fix Committed → Fix Released
Revision history for this message
James Page (james-page) wrote :

The verification of the Stable Release Update for cinder has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
James Page (james-page) wrote :

This bug was fixed in the package cinder - 2:21.3.1-0ubuntu1.1~cloud0
---------------

 cinder (2:21.3.1-0ubuntu1.1~cloud0) jammy-zed; urgency=medium
 .
   * revert driver assister volume retype (LP: #2019190)
     - d/p/0001-Revert-Driver-assisted-migration-on-retype-when-it-s.patch

Revision history for this message
James Page (james-page) wrote :

The verification of the Stable Release Update for cinder has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
James Page (james-page) wrote :

This bug was fixed in the package cinder - 2:20.3.1-0ubuntu1.1~cloud0
---------------

 cinder (2:20.3.1-0ubuntu1.1~cloud0) focal-yoga; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 cinder (2:20.3.1-0ubuntu1.1) jammy; urgency=medium
 .
   * Revert driver assisted volume retype (LP: #2019190):
     - d/p/0001-Revert-Driver-assisted-migration-on-retype-when-it-s.patch
