Failed to operate legacy cnf

Bug #1920085 reported by Ayumu Ueha
Affects: tacker
Status: Fix Released
Importance: Medium
Assigned to: Ayumu Ueha

Bug Description

When I performed cnf operations (legacy) according to the procedure described in the document [1], the following errors occurred.

1. vnf create error caused by policies parameter
2. vnf scale error due to key error of policy[]
3. no retry interval during vnf delete wait

[1. vnf create error caused by policies parameter]
==================================================
During the vnf create operation, the following error occurred.

$ openstack vnf descriptor create --vnfd-file tosca-vnfd-containerized.yaml VNFD1
tosca-parser failed: -
The pre-parsed input failed validation with the following error(s):
        MissingRequiredFieldError: "properties" of template "SP1" is missing required field "['increment', 'default_instances']".
                File /usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py, line 221, in main

This error is due to a change in tosca-parser that now checks the required parameters of policies [2].

To avoid the above error, I added the required parameters to the VNFD policies and re-ran the create, but the following error occurred.

$ openstack vnf create --vnfd-name VNFD1 --vim-name vim-kubernetes VNF1
Found unsupported keys for ['default_instances', 'increment']

To avoid this "Found unsupported keys" error, it is necessary to remove the unsupported-key check for policies [3].
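
A rough sketch of the kind of relaxation meant here; the names below (ALLOWED_SCALING_PROPS, validate_scaling_policy) and the property list are illustrative, not the actual identifiers in translate_inputs.py [3]:

# Illustrative only: accept the properties that tosca-parser now requires [2]
# instead of rejecting them as unsupported keys.
ALLOWED_SCALING_PROPS = {
    'min_instances', 'max_instances', 'target_cpu_utilization_percentage',
    'default_instances', 'increment',
}

def validate_scaling_policy(properties):
    """Raise only for keys that are genuinely unknown."""
    unsupported = set(properties) - ALLOWED_SCALING_PROPS
    if unsupported:
        raise ValueError(
            "Found unsupported keys for %s" % sorted(unsupported))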

[2. vnf scale error due to key error of policy[]]
=================================================
After I fixed the above create error, the vnf scale operation failed with the following error.

$ openstack vnf scale --scaling-policy-name SP1 --scaling-type out VNF1
Request Failed: internal server error while processing your request.

[req-1ef50574-e3c8-4423-bc68-61daf38ecafc admin admin] create failed: No details.: KeyError: 'vnf_instance_id'
Traceback (most recent call last):
  File "/opt/stack/tacker/tacker/api/v1/resource.py", line 77, in resource
    result = method(request=request, **args)
  File "/opt/stack/tacker/tacker/api/v1/base.py", line 394, in create
    obj = obj_creator(request.context, **kwargs)
  File "/opt/stack/tacker/tacker/vnfm/plugin.py", line 912, in create_vnf_scale
    self._handle_vnf_scaling(context, policy_)
  File "/opt/stack/tacker/tacker/vnfm/plugin.py", line 850, in _handle_vnf_scaling
    last_event_id = _vnf_policy_action()
  File "/opt/stack/tacker/tacker/vnfm/plugin.py", line 814, in _vnf_policy_action
    _handle_vnf_scaling_post(constants.ERROR)
  File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
    raise value
  File "/opt/stack/tacker/tacker/vnfm/plugin.py", line 802, in _vnf_policy_action
    region_name=region_name
  File "/opt/stack/tacker/tacker/common/driver_manager.py", line 71, in invoke
    return getattr(driver, method_name)(**kwargs)
  File "/opt/stack/tacker/tacker/common/log.py", line 35, in wrapper
    return method(*args, **kwargs)
  File "/opt/stack/tacker/tacker/vnfm/infra_drivers/kubernetes/kubernetes_driver.py", line 1127, in scale
    context, policy['vnf_instance_id'])
KeyError: 'vnf_instance_id'

`policy['vnf_instance_id']` was added for the SOL CNF scale route in vnflcmDriver during Wallaby release development [4].

But in the legacy route, `vnf_instance_id` is not added to `policy[]`, so this error occurs.
The current implementation determines legacy or SOL based on the existence of vnf_resource.

To avoid this error, it is necessary to change the legacy/SOL decision, and I think it is best to base it on the presence of `policy['vnf_instance_id']`, as sketched below.
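
A minimal sketch of that decision; the helper names (_scale_sol, _scale_legacy) are placeholders, not the actual code paths in kubernetes_driver.py:

# Illustrative branching only; _scale_sol and _scale_legacy are placeholders.
def _scale_sol(context, vnf_instance_id, policy):
    print("SOL route for vnf_instance_id=%s" % vnf_instance_id)

def _scale_legacy(context, policy):
    print("legacy route for policy=%s" % policy.get('name'))

def scale(context, policy):
    # The SOL route added in Wallaby [4] sets policy['vnf_instance_id']
    # before calling scale(); the legacy route does not, so branch on its
    # presence instead of assuming the key exists (which raises KeyError).
    vnf_instance_id = policy.get('vnf_instance_id')
    if vnf_instance_id:
        _scale_sol(context, vnf_instance_id, policy)
    else:
        _scale_legacy(context, policy)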

[3. no retry interval during vnf delete wait]
=============================================
I found a problem where the wait process ends immediately because there is no interval between retries in the wait process [5].

If the deletion finishes immediately, there is no problem, but if it takes time, an error will occur.
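
A minimal sketch of a wait loop with a retry interval; the retry count, interval values, and function names are assumptions, not the actual constants in kubernetes_driver.py [5]:

import time

DELETE_WAIT_RETRIES = 10   # hypothetical retry budget
DELETE_WAIT_INTERVAL = 10  # hypothetical seconds between checks

def wait_for_delete(is_deleted):
    for _ in range(DELETE_WAIT_RETRIES):
        if is_deleted():
            return
        # Without this sleep the loop exhausts all retries immediately,
        # which is the behaviour described above.
        time.sleep(DELETE_WAIT_INTERVAL)
    raise RuntimeError("resource was not deleted within the retry budget")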

Reference
=========
[1] https://docs.openstack.org/tacker/latest/user/containerized_vnf_usage_guide.html
[2] https://review.opendev.org/c/openstack/tosca-parser/+/763144
[3] https://opendev.org/openstack/tacker/src/branch/master/tacker/vnfm/infra_drivers/kubernetes/k8s/translate_inputs.py#L192-L193
[4] https://review.opendev.org/c/openstack/tacker/+/765786
[5] https://opendev.org/openstack/tacker/src/branch/master/tacker/vnfm/infra_drivers/kubernetes/kubernetes_driver.py#L928-L930

Ayumu Ueha (ueha)
Changed in tacker:
assignee: nobody → Ayumu Ueha (ueha)
Yasufumi Ogawa (yasufum)
Changed in tacker:
importance: Undecided → Medium
Ayumu Ueha (ueha) wrote :

Fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/tacker/+/783626

Changed in tacker:
status: New → In Progress
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/tacker 5.0.0.0rc3

This issue was fixed in the openstack/tacker 5.0.0.0rc3 release candidate.

Yasufumi Ogawa (yasufum)
Changed in tacker:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/tacker 6.0.0.0rc1

This issue was fixed in the openstack/tacker 6.0.0.0rc1 release candidate.
