gluster_native: nfs vol mapped layout: shares created from snapshots aren't started, hence are inaccessible

Bug #1499347 reported by karthick
Affects: OpenStack Shared File Systems Service (Manila)
Status: Fix Released
Importance: Medium
Assigned to: Csaba Henk

Bug Description

In the glusterfs driver, the introduction of the volume mapped layout added the capability to create shares from snapshots. Shares created from snapshots are currently inaccessible; this needs to be fixed.

We create snapshot clones ("snapclones") in the backend Gluster cluster, and the Manila shares are then created on top of these snapclones. After creation, the snapclones must be 'start'ed, as is done for other Gluster volumes. Manila does not start the volume, so the backend snapclones remain in the 'Created' state instead of the 'Started' state, making shares created from snapshots inaccessible. This has to be fixed.
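The sequence of backend operations can be sketched as follows; the helper name is hypothetical, but the gluster CLI verbs are the ones involved:

```python
def gluster_clone_commands(clone_volume, snapshot):
    """Return the gluster CLI invocations needed so that a share
    created from a snapshot becomes accessible.

    'snapshot clone' leaves the new volume in the 'Created' state;
    the explicit 'volume start' that follows is the step Manila
    was missing.
    """
    return [
        ["gluster", "snapshot", "clone", clone_volume, snapshot],
        ["gluster", "volume", "start", clone_volume],
    ]
```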

Csaba Henk (chenk)
tags: added: liberty-rc-potential
tags: added: driver glusterfs
Ramana Raja (rraja)
Changed in manila:
status: New → In Progress
assignee: nobody → GlusterFS Drivers (glusterfs-drivers)
Changed in manila:
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to manila (master)

Fix proposed to branch: master
Review: https://review.openstack.org/228772

Changed in manila:
assignee: GlusterFS Drivers (glusterfs-drivers) → Csaba Henk (chenk)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to manila (master)

Reviewed: https://review.openstack.org/228772
Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=4e4c8759a25fa18384b45f4150000aacfaed035a
Submitter: Jenkins
Branch: master

commit 4e4c8759a25fa18384b45f4150000aacfaed035a
Author: Csaba Henk <email address hidden>
Date: Tue Sep 29 09:49:19 2015 +0200

    glusterfs vol layout: start volume cloned from snapshot

    When handling create_share_from_snapshot with the glusterfs
    volume layout, we perform a 'snapshot clone' gluster operation
    that gives us a new volume (which will back the new share).
    'snapshot clone' does not start the resultant volume; we have
    to start it explicitly from Manila. So far the volume layout
    code did not take care of this; rather, the 'vol start' call
    was made from the glusterfs-native driver. That, however,
    broke all other volume-layout-based configs (i.e. the
    glusterfs driver with vol layout).

    Fix this now by doing the 'vol start' call in the vol
    layout code.

    Change-Id: I63c13ce468a3227f09e381814f55e8c914fbef95
    Closes-Bug: #1499347
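A minimal sketch of where the call moved; the class and method names here are hypothetical (the real change is in review 228772), but it shows the 'volume start' now happening in the layout code, so every layout-based driver benefits:

```python
class VolumeMappedLayout:
    """Hypothetical, simplified stand-in for the glusterfs volume
    layout; gluster_call runs a gluster CLI command on the backend.
    """

    def __init__(self, gluster_call):
        self.gluster_call = gluster_call

    def create_share_from_snapshot(self, clone_volume, snapshot):
        # Clone the snapshot into the volume backing the new share.
        self.gluster_call("snapshot", "clone", clone_volume, snapshot)
        # The fix: start the cloned volume here, in the layout code,
        # instead of relying on the glusterfs-native driver to do it.
        self.gluster_call("volume", "start", clone_volume)
        return clone_volume
```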

Changed in manila:
status: In Progress → Fix Committed
Changed in manila:
milestone: none → liberty-rc2
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to manila (stable/liberty)

Fix proposed to branch: stable/liberty
Review: https://review.openstack.org/229769

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to manila (stable/liberty)

Reviewed: https://review.openstack.org/229769
Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=be76d9e404c0f8a9728bccb7b54dffae091d0700
Submitter: Jenkins
Branch: stable/liberty

commit be76d9e404c0f8a9728bccb7b54dffae091d0700
Author: Csaba Henk <email address hidden>
Date: Tue Sep 29 09:49:19 2015 +0200

    glusterfs vol layout: start volume cloned from snapshot

    When handling create_share_from_snapshot with the glusterfs
    volume layout, we perform a 'snapshot clone' gluster operation
    that gives us a new volume (which will back the new share).
    'snapshot clone' does not start the resultant volume; we have
    to start it explicitly from Manila. So far the volume layout
    code did not take care of this; rather, the 'vol start' call
    was made from the glusterfs-native driver. That, however,
    broke all other volume-layout-based configs (i.e. the
    glusterfs driver with vol layout).

    Fix this now by doing the 'vol start' call in the vol
    layout code.

    Change-Id: I63c13ce468a3227f09e381814f55e8c914fbef95
    Closes-Bug: #1499347
    (cherry picked from commit 4e4c8759a25fa18384b45f4150000aacfaed035a)

tags: added: in-stable-liberty
Thierry Carrez (ttx)
Changed in manila:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in manila:
milestone: liberty-rc2 → 1.0.0
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to manila (master)

Fix proposed to branch: master
Review: https://review.openstack.org/235328

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to manila (master)

Reviewed: https://review.openstack.org/235328
Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=ec94d6929ea8e1b4ce10dc91cd4954ece808668c
Submitter: Jenkins
Branch: master

commit f1eded1fbcaf309e1c9a4be3f8a14bd25daa3e46
Author: Gaurang Tapase <email address hidden>
Date: Mon Oct 12 20:47:19 2015 +0530

    Fix usage of dependencies

    Manila is broken in three places, so fix them:
    1) test 'test_misc' with WebOb 1.5

    WebOb 1.5 was released at 2015-10-11. With this new version,
    webob.exc.WSGIHTTPException() constructor now fails with a KeyError
    when the HTTP status code is 0.

    test_exceptions_raise() of test_misc tries to instantiate all
    exceptions of manila.exception. The problem is that
    ConvertedException uses a default HTTP status code of 0.

    Modify the default HTTP status code of ConvertedException to 400 to
    fix the unit tests.

    2) Add dependency for 'testresources' that is required by migration
    tests.

    3) Remove 2 unit tests related to oslo.policy lib functionality
    that should not be tested in Manila. They started failing
    because under-the-hood behaviour changed in the new release,
    0.12.0.

    Closes-Bug: #1505153
    Closes-Bug: #1505374

    (cherry picked from commit 9c99814ce5943bd4c33bf3650b832666e31b3411)

    -- squashed with another change to get tests to pass on stable/liberty --

    Fix broken unit tests

    With the release of six version 1.10.0, several of our unit
    tests started to fail because they relied on non-strict
    constructions.

    Changes:
    1) Manila unit test
    "manila.tests.share.test_api.ShareAPITestCase.test_extend_quota_error"
    used str for int substitution. So, use int data for int substitution.

    2) The module 'manila.share.drivers.hp.hp_3par_mediator' was
    calling the LOG.exception function where no traceback existed,
    which led to an AttributeError on py34. So, replace all usages
    of 'LOG.exception' with 'LOG.error' where no raised exception
    exists.

    Change-Id: Ic5b37bfb9d939d03f6ff68bc53d134bf9e5f996e
    Closes-Bug: #1503969
    (cherry picked from commit f38b8d4efd1f68f4ea29747f7377e0936f61d89c)

    --

    Change-Id: I0f28f3c3fb2c7eec1bafc3a617344990f86810cf
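The WebOb failure in item 1 above comes down to a table lookup: WebOb builds the status line by looking the numeric code up in a code-to-reason mapping, and 0 has no entry. A rough illustration using the standard library's equivalent table (this is not WebOb's actual internals):

```python
from http.client import responses  # stdlib mapping: code -> reason phrase

def status_line(code):
    # WebOb 1.5 performs an equivalent lookup when constructing
    # WSGIHTTPException; a status code of 0 has no entry, so the
    # lookup raises KeyError -- hence the new default of 400.
    return "%d %s" % (code, responses[code])
```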

commit 151b691bb2d4baa436913924df60b8c197f91463
Author: Valeriy Ponomaryov <email address hidden>
Date: Thu Oct 1 12:42:05 2015 +0300

    Fix display of availability-zone for manila-manage command

    Commands "manila-manage service list" and "manila-manage host list"
    were displaying availability zone instance instead of its name.

    This bug appeared after the implementation of the availability
    zone model. Fix it by displaying the 'name' field of the
    availability zone model.

    Change-Id: I14c3451380df01853183aed265344b1783c95939
    Closes-Bug: #1499677
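The display bug can be illustrated like this (names hypothetical, simplified from manila-manage): printing the model object yields its repr, while the fix reads its 'name' field:

```python
class AvailabilityZone:
    """Stand-in for the availability zone model."""

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "<AvailabilityZone object>"  # what the table showed pre-fix

def az_column(service):
    # After the fix: display the zone's name, not the model instance.
    return service["availability_zone"].name
```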

commit 77455a5ac6be828e8dfd3e75566eaff2823595d4
Author: Csaba Henk <email address hidden>
Date: Wed Sep 30 14:57:00 2015 +0200

    glusterfs_native: use dynamic-auth option if available

    With dynamic-auth restarting the volume is not necessary
    in deny_access.

    Change-Id: Ic25af1795c279b34370...

Revision history for this message
Doug Hellmann (doug-hellmann) wrote : Fix included in openstack/manila 2.0.0.0b1

This issue was fixed in the openstack/manila 2.0.0.0b1 development milestone.
