NetApp cheesecake: cinder service-list --withreplication should show disabled if api fails

Bug #1695319 reported by chad morgenstern
This bug affects 2 people
Affects Status Importance Assigned to Milestone
Cinder
New
Undecided
Unassigned

Bug Description

Under any condition in which replication is enabled but the backend API call fails, cinder service-list --withreplication should report a replication state of disabled rather than enabled.

In the scenario that follows, the source and target backends use the same vserver. This configuration is invalid and replication setup fails, yet cinder service-list --withreplication still reports the replication status as enabled. Only by looking in the logs do you see the failure.

Other scenarios that show the same behavior:
1) The target and source aggregates are unknown, yet the replication status shows enabled.
2) SnapMirror is not licensed, yet the replication status shows enabled.
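All three failure modes surface as an API error from the backend during replication setup. A minimal sketch of the expected behavior follows; the names here (NaApiError stand-in, check_replication, failing_setup) are hypothetical illustrations, not the actual cinder driver interface:

```python
class NaApiError(Exception):
    """Stand-in for the NetApp API error raised on a failed setup call
    (same-vserver target, unknown aggregates, missing SnapMirror license)."""


def check_replication(setup_call):
    """Report 'enabled' only if the replication setup call actually succeeds.

    Any backend API failure should downgrade the advertised status to
    'disabled' instead of leaving it 'enabled'.
    """
    try:
        setup_call()
        return 'enabled'
    except NaApiError:
        return 'disabled'


def failing_setup():
    # Mimics the same-vserver failure reported in the logs below.
    raise NaApiError('17104: Source cannot be the same as the destination volume.')


print(check_replication(failing_setup))   # disabled
print(check_replication(lambda: None))    # enabled
```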

[root@scsor0012900001 instances(keystone_admin)]# cinder service-list --withreplication
+------------------+---------------------------------------------------+------+---------+-------+----------------------------+--------------------+-------------------+--------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Replication Status | Active Backend ID | Frozen | Disabled Reason |
+------------------+---------------------------------------------------+------+---------+-------+----------------------------+--------------------+-------------------+--------+-----------------+
| cinder-scheduler | scsor0012900001.rtp.openenglab.netapp.com | nova | enabled | up | 2017-06-02T16:15:55.000000 | | | | - |
| cinder-volume | scsor0012900001.rtp.openenglab.netapp.com@lvm | nova | enabled | up | 2017-06-02T16:15:57.000000 | disabled | - | False | - |
| cinder-volume | scsor0012900001.rtp.openenglab.netapp.com@prodnfs | nova | enabled | up | 2017-06-02T16:15:59.000000 | enabled | - | False | - |
+------------------+---------------------------------------------------+------+---------+-------+----------------------------+--------------------+-------------------+--------+-----------------+

Yet replication is not occurring, because the origin and target are the same vserver, so the volumes cannot be replicated onto themselves.

[prodnfs]
volume_backend_name=prodnfs
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=192.168.12.10
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=*******
netapp_vserver=prod
nfs_shares_config=/etc/cinder/prod_mounts
netapp_pool_name_search_pattern = *CHAD_PROD* # match all r/w FlexVols on the vServer
replication_device = backend_id:drnfs
netapp_replication_aggregate_map = backend_id:drnfs,stlfas2552_7_8_01_AggrGroup1_1:stlfas2552_7_8_02_AggrGroup2_1

[drnfs]
volume_backend_name=drnfs
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=192.168.19.10
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=******
netapp_vserver=prod
nfs_shares_config=/etc/cinder/prod_mounts
netapp_pool_name_search_pattern = *CHAD_PROD* # match all r/w FlexVols on the vServer
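For comparison, a configuration that would avoid the same-vserver failure points the target backend at a distinct vserver and share list; the values below are illustrative, not taken from the reporter's environment:

```ini
[drnfs]
volume_backend_name=drnfs
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=192.168.19.10
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=******
netapp_vserver=dr                          # must differ from the source backend's vserver
nfs_shares_config=/etc/cinder/dr_mounts    # shares exported by the DR vserver
```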

As the logs show:
2017-06-02 12:14:57.042 17894 INFO cinder.volume.manager [req-ad6e441e-b028-4e2d-898d-61b3450f7a2f - - - - -] Initializing RPC dependent components of volume driver LVMVolumeDriver (3.0.0)
2017-06-02 12:14:57.547 17894 INFO cinder.volume.manager [req-ad6e441e-b028-4e2d-898d-61b3450f7a2f - - - - -] Driver post RPC initialization completed successfully.
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'cinder.volume.drivers.netapp.dataontap.nfs_cmode.NetAppCmodeNfsDriver._handle_housekeeping_tasks' failed
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall Traceback (most recent call last):
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall result = func(*self.args, **self.kw)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 827, in trace_method_logging_wrapper
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 827, in trace_method_logging_wrapper
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_cmode.py", line 165, in _handle_housekeeping_tasks
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall self.ssc_library.get_ssc_flexvol_names())
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/utils/data_motion.py", line 467, in ensure_snapmirrors
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall dest_flexvol_name)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/utils/data_motion.py", line 170, in create_snapmirror
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall schedule='hourly')
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 827, in trace_method_logging_wrapper
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 827, in trace_method_logging_wrapper
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/client/client_cmode.py", line 1999, in create_snapmirror
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall self.send_request('snapmirror-create', api_args)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 827, in trace_method_logging_wrapper
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/client/client_base.py", line 90, in send_request
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall return self.connection.invoke_successfully(request, enable_tunneling)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/client/api.py", line 222, in invoke_successfully
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall raise NaApiError(code, msg)
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall NaApiError: NetApp API failed. Reason - 17104:Source prod:OPENSTACK_RDO_openstack_01_AggrGroup1_1_OPENSTACK_CHAD_PROD_2 cannot be the same as the destination volume.
2017-06-02 12:14:57.679 17898 ERROR oslo.service.loopingcall
2017-06-02 12:14:58.632 17898 INFO cinder.volume.manager [req-b1bdc590-f7d1-4b90-be64-1ec378e00ebe - - - - -] Driver initialization completed successfully.
2017-06-02 12:14:58.637 17898 INFO cinder.manager [req-b1bdc590-f7d1-4b90-be64-1ec378e00ebe - - - - -] Initiating service 3 cleanup
2017-06-02 12:14:58.641 17898 INFO cinder.manager [req-b1bdc590-f7d1-4b90-be64-1ec378e00ebe - - - - -] Service 3 cleanup completed.
2017-06-02 12:14:58.648 17898 WARNING py.warnings [req-b1bdc590-f7d1-4b90-be64-1ec378e00ebe - - - - -]
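The traceback shows that the same-vserver misconfiguration is only detected when ONTAP rejects snapmirror-create with error 17104. A hedged sketch of a pre-flight check that would fail fast instead; validate_snapmirror_endpoints is a hypothetical helper, not an existing cinder function:

```python
def validate_snapmirror_endpoints(src_vserver, src_volume, dst_vserver, dst_volume):
    """Reject a SnapMirror relationship whose source and destination are
    the same vserver/volume pair, before calling snapmirror-create."""
    if (src_vserver, src_volume) == (dst_vserver, dst_volume):
        raise ValueError('SnapMirror source %s:%s cannot be the same as the '
                         'destination volume' % (src_vserver, src_volume))
    return True
```

A driver could run this check during replication setup and mark the backend's replication status as disabled when it fails, rather than leaving the looping housekeeping task to hit the API error on every cycle.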

Tags: driver netapp