If more than one NFS share is defined, all volumes are allocated on the
same NFS share until it is no longer eligible; only then is a second NFS
share filled. The code is supposed to split volumes across the different
NFS shares, not fill them one by one.
The bug is in _find_shares(): if at least one NFS share is eligible, the
"size" parameter (the size of the requested volume in gigabytes) is
replaced with the total size in bytes of that share. Because of that,
_find_shares() cannot find more than one eligible NFS share.
_find_shares() sorts eligible shares by available size. Because of this
bug, all volumes are allocated on the same NFS share, instead of being
split across different NFS shares.
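The failure mode can be sketched in a few lines. This is a hypothetical reconstruction based on the description above, not the actual Cinder driver code: the share names, the capacity table, and the helper is_share_eligible() are all invented for illustration. The essential point is that the buggy loop rebinds the local name size (gigabytes requested) to a share's total capacity in bytes, so every later eligibility check fails.

```python
GiB = 1024 ** 3

# Hypothetical capacity data: share -> (total_bytes, available_bytes).
SHARES = {
    "filer:/vol/a": (100 * GiB, 80 * GiB),
    "filer:/vol/b": (100 * GiB, 90 * GiB),
}


def is_share_eligible(share, requested_gb):
    """A share is eligible if it has enough free space for the volume."""
    _total, available = SHARES[share]
    return available >= requested_gb * GiB


def find_shares_buggy(size):
    """'size' is the requested volume size in GiB."""
    shares = []
    for share in SHARES:
        if is_share_eligible(share, size):
            # BUG: rebinds 'size' to the share's total capacity in BYTES.
            # On the next iteration, is_share_eligible() compares against
            # an absurdly large request, so no further share qualifies.
            size, available = SHARES[share]
            shares.append((share, available))
    # Sort eligible shares by available space, best first.
    return [s for s, avail in sorted(shares, key=lambda x: x[1], reverse=True)]


def find_shares_fixed(size):
    """Same logic, but 'size' keeps its meaning for the whole loop."""
    shares = []
    for share in SHARES:
        if is_share_eligible(share, size):
            _total, available = SHARES[share]  # fresh names; 'size' untouched
            shares.append((share, available))
    return [s for s, avail in sorted(shares, key=lambda x: x[1], reverse=True)]
```

With a 10 GiB request, the buggy version returns only the first share it visits, while the fixed version returns every eligible share ordered by free space, which is what lets volumes spread across shares.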
Reviewed: https://review.openstack.org/121467
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=93de51d57b2287bfb9b39bc1351d7a612916e0a0
Submitter: Jenkins
Branch: stable/icehouse
commit 93de51d57b2287bfb9b39bc1351d7a612916e0a0
Author: Victor Stinner <email address hidden>
Date: Mon Sep 15 07:58:13 2014 +0000
Fix NetAppDirectCmodeNfsDriver._find_shares()
If more than one NFS share is defined, all volumes are allocated on the
same NFS share until it is no longer eligible; only then is a second NFS
share filled. The code is supposed to split volumes across the different
NFS shares, not fill them one by one.
The bug is in _find_shares(): if at least one NFS share is eligible, the
"size" parameter (the size of the requested volume in gigabytes) is
replaced with the total size in bytes of that share. Because of that,
_find_shares() cannot find more than one eligible NFS share.
_find_shares() sorts eligible shares by available size. Because of this
bug, all volumes are allocated on the same NFS share, instead of being
split across different NFS shares.
The _find_shares() has been removed in Juno, see:
* commit 98aa91b0e271d6fbd9dfe43bf4152fb138a46a89
* change Ie6f155df7bc1ae2cd5f7fa39f1b1a0ad38075988
Change-Id: I7b72921a2a45d5dd0aa969c8c32f7d815175f3a8
Co-author: "Florent Flament" <email address hidden>
Closes-Bug: #1369426