VMware: Thick type volumes getting attached as thin volume over nfs datastore

Bug #1293955 reported by satyadev svn
This bug affects 1 person
Affects Status Importance Assigned to Milestone
Cinder
Won't Fix
Wishlist
Vipin Balachandran

Bug Description

Created a volume of type thick and attached it to an instance.

The current placement logic selects the datastore with the highest free_space/total_space ratio, so the thick volume landed on an NFS datastore. Because a non-VAAI NFS datastore does not support thick provisioning, the volume ends up attached as a thin volume.

So if the cluster has an NFS datastore with a higher free_space/total_space ratio than its VMFS or VSAN datastores, we end up attaching thin volumes to the instance instead of thick.
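The placement rule described above can be sketched minimally as follows (the field names and `pick_datastore` helper are illustrative, not the driver's actual data structures): the datastore with the highest free_space/total_space ratio wins, regardless of its type.

```python
def pick_datastore(datastores):
    """Return the datastore with the highest free/total space ratio."""
    return max(datastores, key=lambda ds: ds['free_space'] / ds['total_space'])

datastores = [
    {'name': 'vmfs-1', 'type': 'VMFS', 'free_space': 200, 'total_space': 1000},
    {'name': 'nfs-1', 'type': 'NFS', 'free_space': 900, 'total_space': 1000},
]

# nfs-1 wins the tie-break (0.9 > 0.2), so a thick volume lands on the
# non-VAAI NFS datastore and is silently created thin.
print(pick_datastore(datastores)['name'])  # nfs-1
```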

Tags: drivers vmware
Changed in cinder:
status: New → Confirmed
importance: Undecided → Wishlist
assignee: nobody → Vipin Balachandran (vbala)
tags: added: drivers
summary: - Thick type volumes getting attached as thin volume over nfs datastore
+ VMware: Thick type volumes getting attached as thin volume over nfs
+ datastore
tags: removed: vmdk
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/113862

Changed in cinder:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/113862
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=2c672d1100ad4f44517838e17156b3ea6300b1cc
Submitter: Jenkins
Branch: master

commit 2c672d1100ad4f44517838e17156b3ea6300b1cc
Author: Vipin Balachandran <email address hidden>
Date: Fri Aug 22 19:23:11 2014 +0530

    VMware: Improve datastore selection logic

    The current datastore selection logic is not modularized and difficult
    to extend. The retype API implementation has a requirement to specify
    hard anti-affinity requirement with the current backing datastore. Some
    of the bug fixes also need to specify hard affinity with one or more
    datastore types. To support such requirements and to enable future
    extensions, this patch introduces a new module which contains datastore
    selection logic. The existing code for datastore selection is reused as
    much as possible. The dependency on existing datastore selection logic
    will be removed in a separate patch.

    The current datastore selection iterates over a list of hosts, for each
    host, queries the connected valid datastores and tries to select a
    suitable datastore. The filtering is based on space, storage profile,
    number of connected hosts and space utilization. The space utilization
    is used only for breaking ties. If a suitable datastore is found, further
    processing of list of hosts is skipped, which could result in uneven space
    utilization. To solve this, the new selection logic introduces a requirement
    called 'preferred_utilization_threshold' which can be exposed as a driver
    config option.

    Partial-bug: #1275682
    Partial-bug: #1301943
    Partial-bug: #1293955

    Change-Id: I17e90aa09a303fbb8d4ad90037f440c8c4e7d072
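
The 'preferred_utilization_threshold' requirement mentioned in the commit message could look roughly like the sketch below (the function name, data shapes, and the 0.75 default are assumptions for illustration, not the merged implementation):

```python
def select_datastore(datastores, preferred_utilization_threshold=0.75):
    """Prefer datastores whose space utilization is at or below the
    threshold; among those, pick the one with the most free space.
    Fall back to the least-utilized datastore if none qualify."""
    def utilization(ds):
        return 1.0 - ds['free_space'] / ds['total_space']

    preferred = [ds for ds in datastores
                 if utilization(ds) <= preferred_utilization_threshold]
    if preferred:
        return max(preferred, key=lambda ds: ds['free_space'])
    return min(datastores, key=utilization)

datastores = [
    {'name': 'ds-busy', 'free_space': 100, 'total_space': 1000},  # 90% used
    {'name': 'ds-ok', 'free_space': 400, 'total_space': 1000},    # 60% used
]
print(select_datastore(datastores)['name'])  # ds-ok
```

Filtering by a utilization threshold before tie-breaking on free space keeps placement from repeatedly picking the same nearly-full datastore, which addresses the uneven-utilization problem the commit message describes.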

Revision history for this message
Sean McGinnis (sean-mcginnis) wrote :

Automatically unassigning due to inactivity.

Changed in cinder:
assignee: Vipin Balachandran (vbala) → nobody
status: In Progress → Triaged
Changed in cinder:
assignee: nobody → Vipin Balachandran (vbala)
Changed in cinder:
status: Triaged → Won't Fix
Revision history for this message
Vipin Balachandran (vbala) wrote :

This is due to a limitation of NFS datastores without hardware acceleration. We could skip such datastores when a thick vmdk is required, but that would add complexity to the datastore selection logic.

Instead, we can use storage policies in the volume type to skip NFS datastores without hardware acceleration when thin vmdks are not desirable.
