nova boot fails with "--max-count" option
$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
$ nova boot --min-count=3 --max-count=3 --flavor XXflavor \
    --block-device source=image,id=${MY_CENTOS_IMG},dest=volume,size=10,shutdown=remove,bootindex=0 \
    --key-name key-for-internal --security-groups sg-all-from-private-net \
    --availability-zone AZ2 --nic net-id=${MY_WORK_NET} --nic net-id=${MY_PRV_NET} \
    web_ext_az2
$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+----------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------------+--------+------------+-------------+----------------------------------------------------------+
| 168367d2-3a2e-4b62-b78f-f2c2e234c191 | web_ext_az2-1 | ACTIVE | - | Running | WORK-Net=192.168.30.21; Private-Net=192.168.10.24 |
| 98173a45-0947-42ed-bcaa-e175ac8aeb6b | web_ext_az2-2 | ACTIVE | - | Running | WORK-Net=192.168.30.18; Private-Net=192.168.10.27 |
| bbd9ada5-71e8-48a4-9a69-d36dbb5641a9 | web_ext_az2-3 | ERROR | - | NOSTATE | |
+--------------------------------------+------------------------+--------+------------+-------------+----------------------------------------------------------+
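The third instance went into ERROR. For the record, the fault summary on the failed instance can be pulled with nova show (output omitted here; the underlying error is in cinder-volume.log, quoted below):

$ nova show bbd9ada5-71e8-48a4-9a69-d36dbb5641a9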
<cinder-volume.log>
2018-06-07 08:03:48.787 17974 WARNING cinder.volume.manager [req-8a322777-ae06-4765-807c-d987f049288c 45336d44e7174de5b7d0430639437ccf 7ce30e9231de44009f10b555896105d4 - default default] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (34a3e9a3-fca4-4ff5-8290-fd7c3e3528b7) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2018-06-07 08:03:48.790 17974 WARNING cinder.volume.manager [req-8a322777-ae06-4765-807c-d987f049288c 45336d44e7174de5b7d0430639437ccf 7ce30e9231de44009f10b555896105d4 - default default] Flow 'volume_create_manager' (074daf4a-94d5-4884-b420-6684e061b6a8) transitioned into state 'REVERTED' from state 'RUNNING'
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server [req-8a322777-ae06-4765-807c-d987f049288c 45336d44e7174de5b7d0430639437ccf 7ce30e9231de44009f10b555896105d4 - default default] Exception during message handling
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/manager.py", line 4367, in create_volume
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server allow_reschedule=allow_reschedule, volume=volume)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/manager.py", line 635, in create_volume
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server _run_flow()
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/manager.py", line 627, in _run_flow
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server flow_engine.run()
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout):
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server failures[0].reraise()
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server six.reraise(*self._exc_info)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server result = task.execute(**arguments)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 853, in execute
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server **volume_spec)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 790, in _create_from_image
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server image_service
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 669, in _create_from_image_download
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server image_service)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server File "/opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 550, in _copy_image_to_volume
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server raise exception.ImageCopyFailure(reason=ex)
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server ImageCopyFailure: Failed to copy image to volume: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: Error (HTTP 500) OPERATION_FAILED - The operation failed: The volume list 'helion-cp1-c1-m1-mgmtVolumeList1528358624' cannot be created because a volume list with the same name already exists. Enter a different name for the new volume list.
2018-06-07 08:03:48.792 17974 ERROR oslo_messaging.rpc.server
The Cinder volume is not created when this issue occurs.
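To double-check whether a stray volume was nevertheless left behind in error state, cinder list can be used, and cinder delete clears it if one shows up (the volume ID below is a placeholder):

$ cinder list
$ cinder delete <volume-id>    # only if an error volume is left over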
If I set "hplefthand_debug = true", the following message is recorded:
2018-06-07 08:45:57.440 30555 DEBUG cinder.volume.drivers.hpe.hpe_lefthand_iscsi [req-9bb0e9b4-33dc-4956-9a8b-baf8add719d5 45336d44e7174de5b7d0430639437ccf 7ce30e9231de44009f10b555896105d4 - default default] <== initialize_connection: exception (247ms) VolumeBackendAPIException(u'Error (HTTP 500) SERVER_ALREADY_EXISTS - The server with the name "$1" already exists. Use a unique name and try again.',) trace_logging_wrapper /opt/stack/venv/cinder-20171028T231402Z/lib/python2.7/site-packages/cinder/utils.py:901
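For reference, the flag was enabled in the LeftHand backend section of cinder.conf, roughly like this (the section name here is just an example; use the name from your enabled_backends):

<cinder.conf>
[lefthand1]
hplefthand_debug = true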
Increasing the --max-count value makes the failure occur more frequently.
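For now, booting the instances one at a time with --poll works around it (a sketch, assuming the collision is triggered by parallel volume creations; --poll makes each boot block until the instance is active, so the volume creations do not overlap):

# Workaround sketch: serialize the boots instead of using --min/max-count.
for i in 1 2 3; do
  nova boot --poll --flavor XXflavor \
    --block-device source=image,id=${MY_CENTOS_IMG},dest=volume,size=10,shutdown=remove,bootindex=0 \
    --key-name key-for-internal --security-groups sg-all-from-private-net \
    --availability-zone AZ2 --nic net-id=${MY_WORK_NET} --nic net-id=${MY_PRV_NET} \
    "web_ext_az2-${i}"
done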
It might be better to bring this up through your Helion support. It appears there is a configuration or connectivity issue with your storage backend that may have caused a retry to create the same object twice, but that is better diagnosed by them than by the upstream community.