RemoteFsConnector doesn't unmount volume on disconnect

Bug #1559342 reported by Alex Kolbasov
This bug affects 1 person
Affects: Cinder
Status: New
Importance: Medium
Assigned to: Unassigned

Bug Description

I am developing a Cinder driver and noticed that every time I create a volume from an image I get an NFS mount of the backing volume. This causes trouble because my driver manages all mounts and it doesn't know about this one, so when the volume is deleted we get a dangling mount.

After some digging around I found the problem: it is in os_brick/initiator/connector.py, in RemoteFsConnector(InitiatorConnector).

Here is the problem:

    def connect_volume(self, connection_properties):
        """Ensure that the filesystem containing the volume is mounted.

        :param connection_properties: The dictionary that describes all
                                      of the target volume attributes.
             connection_properties must include:
             export - remote filesystem device (e.g. '172.18.194.100:/var/nfs')
             name - file name within the filesystem
        :type connection_properties: dict
        :returns: dict

        connection_properties may optionally include:
        options - options to pass to mount
        """
        path = self._get_volume_path(connection_properties)
        return {'path': path}

    def disconnect_volume(self, connection_properties, device_info):
        """No need to do anything to disconnect a volume in a filesystem.

        :param connection_properties: The dictionary that describes all
                                      of the target volume attributes.
        :type connection_properties: dict
        :param device_info: historical difference, but same as connection_props
        :type device_info: dict
        """

So connect_volume mounts the volume, but disconnect_volume doesn't unmount it, so the share is left mounted forever. That is wrong for any driver that maintains its own mounts, since the driver doesn't know anything about this mount.

IMO disconnect_volume should actually unmount it.
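
For illustration, here is roughly what an unmounting disconnect_volume could look like. This is a sketch only, reusing the connector's existing self._remotefsclient, self._execute and self._root_helper attributes and the putils/LOG names already imported in connector.py:

    def disconnect_volume(self, connection_properties, device_info):
        """Sketch of a possible fix: unmount what connect_volume mounted."""
        nfs_share = connection_properties['export']
        mount_path = self._remotefsclient.get_mount_point(nfs_share)
        try:
            self._execute('umount', mount_path, run_as_root=True,
                          root_helper=self._root_helper)
        except putils.ProcessExecutionError as exc:
            # Another volume may still be using the mount point;
            # in that case leave it mounted rather than failing.
            LOG.debug('Unmount of %(path)s failed: %(err)s',
                      {'path': mount_path, 'err': exc})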

tags: added: os-brick remotefs
Changed in cinder:
importance: Undecided → Medium
Revision history for this message
Eric Harney (eharney) wrote :

The main reason the NFS exports are left mounted is because it is rather difficult to determine when it is safe to unmount them in a non-racy way.

Nova has a solution to this: it attempts an unmount and lets it silently fail if the mount is in use. But I think doing that in Cinder may introduce more issues than it fixes, because there is a race between the check and another operation attempting to use the mount point.
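
In pseudo-form, that opportunistic-unmount pattern looks like this (a sketch, not the actual Nova code; it assumes the same self._execute/putils helpers as above):

    def _try_unmount(self, mount_path):
        try:
            self._execute('umount', mount_path, run_as_root=True,
                          root_helper=self._root_helper)
        except putils.ProcessExecutionError as exc:
            # umount exits non-zero while the mount point is in use
            # ("device is busy"); treat that as "leave it mounted".
            # The race remains: another operation may start using the
            # mount point between this failure and a later retry.
            if 'busy' in str(exc):
                LOG.debug('%s is in use, leaving it mounted',
                          mount_path)
            else:
                raise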

Revision history for this message
Alex Kolbasov (akolb1) wrote :

What serialization guarantees does Cinder provide with regard to driver operations?

What you are saying essentially means that a driver can't manage its own mounts reliably. In my case there is a mount per Cinder volume, so in the presence of many volumes there may be a lot of NFS mounts, most of which are not needed.

Revision history for this message
Eric Harney (eharney) wrote :

Cinder doesn't provide many, but you can guarantee such things by adding locks to methods in your driver. If you are doing mounts per-volume, that should be easier than the case I was discussing, where many volumes are on the same mount.
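
For example, with oslo.concurrency's lockutils a driver can serialize everything that touches a given share (a sketch; _do_mount and _do_umount are hypothetical helpers in your driver):

    # assumes: from oslo_concurrency import lockutils
    def _mount_share(self, share):
        # One file lock per share: mounts and unmounts of the same
        # share serialize on the same lock name, so an unmount can't
        # race with a concurrent mount. external=True makes it a
        # file-based lock shared across processes on the host.
        with lockutils.lock('mount-%s' % share, external=True):
            self._do_mount(share)

    def _unmount_share(self, share):
        with lockutils.lock('mount-%s' % share, external=True):
            self._do_umount(share)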

Revision history for this message
Alex Kolbasov (akolb1) wrote :

I have a workaround for my driver:

- I use the remotefs brick for my own mounts so that both use the same mount points. As a result, if a volume is already mounted by the driver, it won't be mounted again by the remotefs brick.

- I explicitly mount the volume in create_export() and unmount it in remove_export(). As a result, Cinder skips the already mounted volume, and it is properly unmounted when remove_export() is called.

- Also, I explicitly unmount the volume on delete. This is safe because unmounting a volume that isn't mounted is a no-op, so it works even if something else mounts the volume - as long as it goes through the remotefs brick.

This addresses the problem for me.
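
In outline (signatures abbreviated; _share_for and _unmount_share are illustrative helpers, and _unmount_share treats "not mounted" as a no-op):

    def create_export(self, context, volume):
        # Mount through the remotefs brick so that a later
        # connect_volume finds the share already mounted and does
        # not mount it a second time.
        self._remotefsclient.mount(self._share_for(volume))

    def remove_export(self, context, volume):
        # Undo the mount made in create_export.
        self._unmount_share(self._share_for(volume))

    def delete_volume(self, volume):
        # Unmounting a share that isn't mounted is a no-op, so this
        # is safe even if remove_export already ran.
        self._unmount_share(self._share_for(volume))
        # actual deletion of the backing file/share goes here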

I still find it weird that the driver framework mounts volumes that it doesn't manage or understand (and doesn't handle locking).

Changed in cinder:
assignee: nobody → Sachin Yede (yede-sachin45)
Revision history for this message
Sachin Yede (yede-sachin45) wrote :

Hi Alex,

I tried to reproduce this bug on my end with the following steps:

Setup details:
Single-node (Devstack)
Release: Liberty
Cinder Backend: NFS

After creating a volume from an image I looked for NFS mounts on my setup, but I didn't find any NFS mount for the new volume.
To check for NFS mounts I used the "df -k" command on my system.

Can you please help me reproduce this on my end?

Revision history for this message
Alex Kolbasov (akolb1) wrote :

Hi Sachin,

Regular NFS drivers mount all the shares listed in the config and keep them mounted at all times, so there are no mounts per volume, only a mount per exported share.
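
For reference, the stock NFS driver reads those shares from the file named by the nfs_shares_config option, one export per line (addresses illustrative); every listed export gets mounted and stays mounted:

    172.18.194.100:/var/nfs
    172.18.194.101:/exports/cinder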

In my case I am using a share per volume and only mount the remote FS when it is needed, so normally no mounts are kept around. You can't reproduce it with a regular NFS driver, which behaves differently. The problem is apparent from code inspection: there is no unmount in the disconnect_volume function.

Revision history for this message
Sean McGinnis (sean-mcginnis) wrote : Owner Expired

Unassigning due to no activity.

Changed in cinder:
assignee: Sachin Yede (yede-sachin45) → nobody