os-assisted-volume-snapshots passes unsanitised file path to the libvirt driver

Bug #1861893 reported by Matthew Booth on 2020-02-04
Affects: OpenStack Compute (nova) (Importance: Medium, Assigned to: Unassigned)
Affects: OpenStack Security Advisory (Importance: Undecided, Assigned to: Unassigned)

Bug Description

Nova’s os-assisted-volume-snapshots create and delete REST API calls both pass a structure (create_info and delete_info respectively) that contains a path to the location of a qcow2 snapshot. This path is passed to the libvirt driver without sanity checking. The path passed to create can be trivially altered using ‘..’ components to reference any file on the host as the location to create a new volume snapshot.

I have not been able to exploit this for a few reasons. Most significantly, this API is restricted by default policy to admin only, and an admin already has simpler ways to destroy user data. Also, for create the destination path must already exist and be in qcow2 format, and for delete the destination path must (N.B. a libvirt expert should verify this assertion for me) be in the existing backing chain of the volume. I believe this likely makes delete safe, leaving only create potentially exploitable.
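The traversal itself is ordinary path arithmetic. A minimal sketch (illustrative names only, not nova's actual code) of why an unchecked new_file value escapes the volume's directory:

```python
import os

# Illustrative reconstruction (not nova code): the attacker-supplied
# create_info['new_file'] is joined onto the volume's directory with no
# traversal check, so '..' components walk out of it.
volume_dir = "/dev"                     # directory of the iSCSI volume's path
new_file = "../tmp/evil.qcow2"          # attacker-controlled new_file value

snapshot_path = os.path.join(volume_dir, new_file)
# '/dev/../tmp/evil.qcow2' resolves to '/tmp/evil.qcow2'
resolved = os.path.normpath(snapshot_path)
```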

For create, the user can call guest.snapshot with the snapshot path containing any path on the host:

https://github.com/openstack/nova/blob/b42c54752f4c7d66bde313bdc1e8053d76b5588a/nova/virt/libvirt/driver.py#L2613-L2614

I have verified this on devstack with the following:

$ cat snapshot.json
{
    "snapshot": {
        "volume_id": "a55f2af8-dcb4-41fd-a0fd-bb4e59d3125e",
        "create_info": {
            "snapshot_id": "foo",
            "type": "qcow2",
            "new_file": "../tmp/evil.qcow2"
        }
    }
}

$ curl -i -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -d "$(cat snapshot.json)" http://192.168.123.11/compute/v2.1/os-assisted-volume-snapshots

In this case the referenced volume was iSCSI and therefore had a path of /dev/sda. The resulting snapshot location was therefore /dev/../tmp/evil.qcow2. This generates an error if that file does not exist or is not in qcow2 format. However, if both conditions are met, nova updates the path:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='native'/>
      <source file='/dev/../tmp/evil.qcow2'/>
      <backingStore type='block' index='1'>
        <format type='raw'/>
        <source dev='/dev/sda'/>
        <backingStore/>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
      <serial>a55f2af8-dcb4-41fd-a0fd-bb4e59d3125e</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
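For reference, the "must be in qcow2 format" precondition comes down to the target file carrying a qcow2 header, which begins with the magic bytes QFI\xfb followed by a big-endian version number. A hypothetical probe (an assumed helper, much shallower than the validation qemu/libvirt actually perform) might look like:

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 file


def looks_like_qcow2(path):
    """Cheap header probe: magic bytes plus a plausible version field.

    Illustrative only; real tooling (e.g. 'qemu-img info') validates
    far more of the header than this.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        return False
    magic, version = struct.unpack(">4sI", header)
    return magic == QCOW2_MAGIC and version in (2, 3)
```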

I believe the potential exploit is the ability to overwrite a qcow2-formatted file anywhere on the host with arbitrary data. The attacker would also have to know the target file's path. Block devices have trivially guessable paths, but these are unlikely to contain a qcow2 header. In practice the most likely target would be local qcow2 disks, but this would also require knowing an instance UUID. And, without any other exploit, the user would need to be admin.

Although I was not able to exploit this myself, this is a poorly defined interface which seems generally vulnerable either to somebody more inventive than me, or to future seemingly innocuous code changes or bugs. I suspect that more conservative users in particular would prefer not to expose this capability via a REST API at all, especially if they aren't using it because they don't use cinder remotefs, or they don't use volume snapshots. I recommend:

* In the absence of any likely exploit, we open this bug immediately.
* We try to apply some practical sanitisation to the API-definable paths.
* We provide a mechanism to disable this API for users who don't need it.
* We attempt to replace it with a safer API in a future release.
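The second recommendation could be as simple as resolving the candidate path and rejecting anything that escapes a known snapshot directory. A sketch of that check (hypothetical helper and base directory, not current nova code):

```python
import os


def sanitise_snapshot_path(base_dir, new_file):
    """Reject any new_file that resolves outside base_dir.

    Illustrative sketch of the 'practical sanitisation' suggested above;
    resolves symlinks and '..' components before comparing, so
    '/base/../tmp/evil.qcow2' is caught even though it starts with base_dir.
    """
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base_dir, new_file))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("snapshot path escapes %s: %s" % (base, candidate))
    return candidate
```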

Matthew Booth (mbooth-9) wrote :

I've added Sean and Lee for some additional eyes on it in case we can create an exploit.

Jeremy Stanley (fungi) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

Changed in ossa:
status: New → Incomplete
description: updated
Jeremy Stanley (fungi) wrote :

If default policy restricts the method to global admins as indicated, then I feel like this is probably a class C1 report according to the VMT's taxonomy: https://security.openstack.org/vmt-process.html#incident-report-taxonomy

"Not considered a practical vulnerability (but some people might assign a CVE for it)"

If there's agreement from some Nova core security reviewers (subscribed), we can continue this discussion as a regular public bug.

Agreed with the C1 class proposed in the above comment.

Jeremy Stanley (fungi) wrote :

In keeping with recent OpenStack vulnerability management policy changes, no report should remain under private embargo for more than 90 days. Because this report predates the change in policy, the deadline for public disclosure is being set to 90 days from today. If the report is not resolved within the next 90 days, it will revert to our public workflow as of 2020-05-27. Please see http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012721.html for further details.

description: updated
tags: added: snapshot volumes
Jeremy Stanley (fungi) on 2020-05-19
description: updated
Jeremy Stanley (fungi) wrote :

The embargo for this report has expired and is now lifted, so it's acceptable to discuss further in public.

description: updated
information type: Private Security → Public Security
Changed in nova:
status: New → Confirmed
importance: Undecided → Medium