specifying the same storage adapter port twice causes failure

Bug #1708165 reported by Andreas Scheuring
This bug affects 1 person
Affects: nova-dpm
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

The physical_storage_adapter_mappings config option is a multiline config option. It allows settings like this:

  [dpm]
  physical_storage_adapter_mappings = oid1:0
  physical_storage_adapter_mappings = oid2:1

The nova-dpm code then receives the entries as a list: ['oid1:0', 'oid2:1']. Our code fails when someone provides the following setting:

  [dpm]
  physical_storage_adapter_mappings = oid1:0
  physical_storage_adapter_mappings = oid1:0

-> the same adapter port is specified twice
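For illustration, assuming each entry has the form <adapter-object-id>:<port>, the multiline option ends up as a plain list of strings, so duplicate entries survive parsing untouched (the helper name below is hypothetical, not nova-dpm's actual code):

```python
def parse_mappings(entries):
    """Split each "adapter-id:port" entry into an (adapter_id, port) tuple.

    Hypothetical helper for illustration; nova-dpm's actual parsing
    may differ.
    """
    return [tuple(entry.split(':', 1)) for entry in entries]

# Duplicates pass through unchanged, so both tuples later compete for
# the same HBA name on the HMC, which triggers the HTTP 400.
print(parse_mappings(['oid1:0', 'oid1:0']))
```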

The failure is:

"HTTPError: 400,8: The value of name does not provide a unique value for the corresponding data model property as required". See [1] for the full traceback.

Question: Is it a valid use case that one specifies the same adapter port twice?

If yes - we need to modify our attach_hbas code to deal with it.
If not - we should detect such an invalid config early, when the config file is parsed (ideally in the definition of the config option itself).
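A minimal sketch of the early-detection option, assuming the mappings arrive as a list of "adapter-id:port" strings (the function name is hypothetical):

```python
from collections import Counter


def validate_mappings(mappings):
    """Reject duplicate adapter-port entries before any HMC call is made."""
    duplicates = [m for m, count in Counter(mappings).items() if count > 1]
    if duplicates:
        raise ValueError(
            "physical_storage_adapter_mappings contains duplicate "
            "entries: %s" % ', '.join(sorted(duplicates)))
    return mappings


validate_mappings(['oid1:0', 'oid2:1'])  # unique entries pass through
# validate_mappings(['oid1:0', 'oid1:0']) would raise ValueError here,
# instead of failing later with the opaque HTTP 400 from the HMC.
```

Whether this check lives in the driver's constructor or in the config option definition, the point is the same: fail at config-parse time with a message that names the offending entries.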

[1]
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "/opt/stack/nova-dpm/nova_dpm/virt/dpm/driver.py", line 322, in prep_for_spawn
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] inst.attach_hbas()
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "/opt/stack/nova-dpm/nova_dpm/virt/dpm/vm.py", line 210, in attach_hbas
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] hba = self.partition.hbas.create(dpm_hba_dict)
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "<decorator-gen-38>", line 2, in create
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "/usr/local/lib/python2.7/dist-packages/zhmcclient/_logging.py", line 202, in log_api_call
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] result = func(*args, **kwargs)
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "/usr/local/lib/python2.7/dist-packages/zhmcclient/_hba.py", line 188, in create
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] body=properties)
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "<decorator-gen-19>", line 2, in post
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "/usr/local/lib/python2.7/dist-packages/zhmcclient/_logging.py", line 202, in log_api_call
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] result = func(*args, **kwargs)
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] File "/usr/local/lib/python2.7/dist-packages/zhmcclient/_session.py", line 905, in post
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] raise HTTPError(result_object)
ERROR nova.compute.manager [instance: 190f4521-1e0c-4319-b1b1-37f3cb35cae6] HTTPError: 400,8: The value of name does not provide a unique value for the corresponding data model property as required [POST /api/partitions/c8e99f5e-7758-11e7-8cd2-42f2e9ef1641/hbas]

summary: - duplicate physical_storage_adapter_mappings causes faileures
+ specifying the same storage adapter port twice causes failure
Revision history for this message
Andreas Scheuring (andreas-scheuring) wrote :

We clarified in the internal Meeting on 2017-09-28 that this is not a use case. Now it's possible to discuss a solution...

I tend to keep the multiline behavior (we decided to go with it for readability; networking-dpm uses the same approach). We could either add this extra piece of validation to the driver's constructor, or create a new multiline config option that inherits from the existing multiline one but performs this extra validation...
