test_get_service_by_volume_host_name failed with "cinder-volume not found on host <hostname>#DEFAULT" error

Bug #1734636 reported by Leontii Istomin
This bug affects 1 person
Affects: tempest
Status: Invalid
Importance: Undecided
Assigned to: Leontii Istomin

Bug Description

Bug description: the test_get_service_by_volume_host_name Tempest test (https://github.com/openstack/tempest/blob/master/tempest/api/volume/admin/test_volume_services.py) fails because it expects "ctl03" in the output but gets "ctl03#DEFAULT", where DEFAULT is the name of the Cinder backend.
trace:
Traceback (most recent call last):
  File "/home/rally/.rally/verification/verifier-ba7a09b6-90f0-4eee-bc82-740a4532d4e4/repo/tempest/api/volume/admin/test_volume_services.py", line 80, in test_get_service_by_volume_host_name
    'cinder-volume not found on host %s' % hostname)
  File "/usr/local/lib/python2.7/dist-packages/unittest2/case.py", line 845, in assertNotEqual
    raise self.failureException(msg)
AssertionError: 0 == 0 : cinder-volume not found on host ctl03#DEFAULT
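
For context, the failing check boils down to something like the following simplified sketch (the client attribute names and intermediate variables are assumptions for illustration, not a verbatim copy of the tempest test):

# Simplified sketch, assuming tempest-style volume clients.
volume = self.volumes_client.show_volume(volume_id)['volume']
hostname = volume['os-vol-host-attr:host']      # e.g. "ctl03#DEFAULT"

# List cinder-volume services filtered by that host name.
services = self.admin_volume_services_client.list_services(
    host=hostname, binary='cinder-volume')['services']

# The service list only knows the bare host "ctl03", so filtering by
# "ctl03#DEFAULT" matches nothing and len(services) is 0.
self.assertNotEqual(0, len(services),
                    'cinder-volume not found on host %s' % hostname)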

Manual reproduction:
root@ctl01:~# openstack volume create --size 1 listomin-test
...
root@ctl01:~# openstack volume show -f value -c os-vol-host-attr:host listomin-test
ctl03#DEFAULT

So the test needs to check whether the host value contains a "#" symbol and strip the excess pool suffix before comparing, for example:
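
A minimal sketch of such normalization (the helper name is hypothetical; this is not the actual tempest patch):

def _strip_pool_suffix(host):
    # "ctl03#DEFAULT" -> "ctl03"
    # "ctl02@lvm#LVM" -> "ctl02@lvm"  (matches the host shown by
    #                                  "openstack volume service list")
    return host.split('#', 1)[0]

hostname = _strip_pool_suffix(volume['os-vol-host-attr:host'])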

Changed in tempest:
assignee: nobody → Leontiy Istomin (listomin)
status: New → In Progress
Revision history for this message
Masayuki Igawa (igawa) wrote :

This is the patch for this bug.
https://review.openstack.org/522952

Revision history for this message
Leontii Istomin (listomin) wrote :

Found that without the "enabled_backends" option in cinder.conf we get the following warning from cinder:
LOG.warning(_LW('Configuration for cinder-volume does not specify '
                '"enabled_backends", using DEFAULT as backend. '
                'Support for DEFAULT section to configure drivers '
                'will be removed in the next release.'))
And therefore "openstack volume show -f value -c os-vol-host-attr:host <volume_name>" outputs:
ctl01#DEFAULT
And "openstack volume service list" shows:
+------------------+-----------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-----------+------+---------+-------+----------------------------+
| cinder-scheduler | ctl03 | nova | enabled | up | 2017-12-11T11:50:36.000000 |
| cinder-scheduler | ctl01 | nova | enabled | up | 2017-12-11T11:50:39.000000 |
| cinder-scheduler | ctl02 | nova | enabled | up | 2017-12-11T11:50:36.000000 |
| cinder-volume | ctl03 | nova | enabled | up | 2017-12-11T11:46:44.000000 |
| cinder-volume | ctl01 | nova | enabled | up | 2017-12-11T11:46:44.000000 |
| cinder-volume | ctl02 | nova | enabled | up | 2017-12-11T11:46:45.000000 |
+------------------+-----------+------+---------+-------+----------------------------+
We have "volume_backend_name=DEFAULT" in cinder.conf
If we add "enabled_backends=lvm" to cinder conf (as it designed):
openstack volume service list
+------------------+-----------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-----------+------+---------+-------+----------------------------+
| cinder-scheduler | ctl03 | nova | enabled | up | 2017-12-11T11:50:36.000000 |
| cinder-scheduler | ctl01 | nova | enabled | up | 2017-12-11T11:50:39.000000 |
| cinder-scheduler | ctl02 | nova | enabled | up | 2017-12-11T11:50:36.000000 |
| cinder-volume | ctl03 | nova | enabled | down | 2017-12-11T11:46:44.000000 |
| cinder-volume | ctl01 | nova | enabled | down | 2017-12-11T11:46:44.000000 |
| cinder-volume | ctl02 | nova | enabled | down | 2017-12-11T11:46:45.000000 |
| cinder-volume | ctl01@lvm | nova | enabled | up | 2017-12-11T11:50:32.000000 |
| cinder-volume | ctl02@lvm | nova | enabled | up | 2017-12-11T11:50:33.000000 |
| cinder-volume | ctl03@lvm | nova | enabled | up | 2017-12-11T11:50:34.000000 |
+------------------+-----------+------+---------+-------+----------------------------+
openstack volume show -f value -c os-vol-host-attr:host <volume_name>
ctl02@lvm#LVM
So it is true that a correctly configured Cinder service reports the volume host in the following format:
<host_name>@<backend>#<pool>
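
For reference, a minimal multi-backend cinder.conf fragment that yields this format could look roughly like the following (the section name, driver and backend name are assumptions for illustration, not copied from this environment):

[DEFAULT]
enabled_backends = lvm

[lvm]
# hypothetical backend section for illustration only
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = LVM

With such a section, the volume host is reported as <host_name>@lvm#LVM, while "openstack volume service list" shows the corresponding service host as <host_name>@lvm.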

Revision history for this message
Vishakha Agarwal (vishakha.agarwal) wrote :

Hi,

Is anyone working on this? I am interested in working on it.

Thanks

Revision history for this message
Vishakha Agarwal (vishakha.agarwal) wrote :

This bug seems invalid, as the value of u'os-vol-host-attr:host' is u'vishakha-VirtualBox@lvmdriver-1#lvmdriver-1'.

And the test case ran successfully.

stack@vishakha-VirtualBox:~/tempest$ tempest run --regex tempest.api.volume.admin.test_volume_services.VolumesServicesTestJSON.test_get_service_by_volume_host_name
{u'migration_status': None, u'attachments': [], u'links': [{u'href': u'http://127.0.0.1/volume/v3/161876e3f11541f79d63db9982871b0a/volumes/51a6437e-9dfb-4a8c-a8d4-2f88de37574e', u'rel': u'self'}, {u'href': u'http://127.0.0.1/volume/161876e3f11541f79d63db9982871b0a/volumes/51a6437e-9dfb-4a8c-a8d4-2f88de37574e', u'rel': u'bookmark'}], u'availability_zone': u'nova', u'os-vol-host-attr:host': u'vishakha-VirtualBox@lvmdriver-1#lvmdriver-1', u'encrypted': False, u'updated_at': u'2018-07-10T07:04:51.000000', u'replication_status': None, u'snapshot_id': None, u'id': u'51a6437e-9dfb-4a8c-a8d4-2f88de37574e', u'size': 1, u'user_id': u'335ceb807dff43c59b7ba1c3966312cf', u'os-vol-tenant-attr:tenant_id': u'00d2f747b68e45149f05bd8b733ec56f', u'os-vol-mig-status-attr:migstat': None, u'metadata': {}, u'status': u'available', u'description': None, u'multiattach': False, u'source_volid': None, u'consistencygroup_id': None, u'os-vol-mig-status-attr:name_id': None, u'name': u'tempest-VolumesServicesTestJSON-Volume-1128546194', u'bootable': u'false', u'created_at': u'2018-07-10T07:04:50.000000', u'volume_type': u'lvmdriver-1'}
{0} tempest.api.volume.admin.test_volume_services.VolumesServicesTestJSON.test_get_service_by_volume_host_name [2.182127s] ... ok

======
Totals
======
Ran: 1 tests in 11.0000 sec.
 - Passed: 1
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 2.1821 sec.

==============
Worker Balance
==============
 - Worker 0 (1 tests) => 0:00:02.182127

Changed in tempest:
status: In Progress → Invalid