Incorrect access to the other_config record of the open_vswitch table in configure_ovs_dpdk() and configure_ovs_hw_offload()

Bug #1922335 reported by Bayani Carbone
This bug affects 1 person
Affects: charm-ovn-chassis
Status: Triaged
Importance: Medium
Assigned to: Unassigned

Bug Description

In charm-layer-ovn/lib/charms/ovn_charm.py, the ``other_config`` record of the ``open_vswitch`` table is accessed with syntax that does not match the actual format of the data structure returned for each row.
This affects the methods ``configure_ovs_dpdk`` and ``configure_ovs_hw_offload``.

Currently, the following code is used to access the record (this is the hw_offload case, but the ovs_dpdk case is similar):

    def configure_ovs_hw_offload(self):
        """Configure hardware offload specific bits in Open vSwitch.

        :returns: Whether something changed
        :rtype: bool
        """
        something_changed = False
        opvs = ch_ovsdb.SimpleOVSDB('ovs-vsctl').open_vswitch
        other_config_fmt = 'other_config:{}'
        for row in opvs:
            for k, v in (('hw-offload', 'true'),
                         ('max-idle', '30000'),
                         ):
                if row.get(other_config_fmt.format(k)) != v:
                    something_changed = True
                    if v:
                        opvs.set('.', other_config_fmt.format(k), v)
                    else:
                        opvs.remove('.', 'other_config', k)
        return something_changed

``row.get(other_config_fmt.format(k))`` assumes that the row is a flat dictionary, for example:
{'other_config:hw-offload': 'true', 'other_config:max-idle': '30000'}

However, the data structure format is in fact a nested dictionary like this:
{'other_config': {'hw-offload': 'true', 'max-idle': '30000'}}

Thus, to access a value in the nested dictionary, the following should be used:
``row.get('other_config').get(k)``
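The difference can be illustrated with a plain dict shaped like the ``open_vswitch`` row shown in the debug output further down (this is just a stand-in for illustration, not the real SimpleOVSDB row object):

```python
# Plain-dict stand-in for an open_vswitch row, shaped like the one
# captured in the pdb session below.
row = {
    'other_config': {'hw-offload': 'true', 'max-idle': '30000'},
}

# Buggy flat-key lookup: no key named 'other_config:hw-offload'
# exists at the top level, so this always returns None.
flat = row.get('other_config:hw-offload')
print(flat)  # None

# Correct nested lookup: first fetch the other_config dict,
# then look up the option key inside it.
nested = row.get('other_config').get('hw-offload')
print(nested)  # true
```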

This bug does not break anything at present, because ``row.get(other_config_fmt.format(k))`` always returns ``None``, so the condition ``row.get(other_config_fmt.format(k)) != v`` always evaluates to true; and since we currently set default values for all options, ``opvs.set('.', other_config_fmt.format(k), v)`` is executed in all cases.

However, if we want to use the ``else`` block in the future this will need to be fixed.
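A corrected version of the loop could look like the sketch below. ``FakeOVSDB`` is a hypothetical test double standing in for ``ch_ovsdb.SimpleOVSDB('ovs-vsctl').open_vswitch`` (it is not part of the charm or charm-helpers API); the point is the fixed comparison, which uses the nested ``other_config`` dict:

```python
class FakeOVSDB:
    """Hypothetical stand-in for SimpleOVSDB's open_vswitch table,
    for illustration only. Records the set/remove calls made."""

    def __init__(self, rows):
        self._rows = rows
        self.calls = []

    def __iter__(self):
        return iter(self._rows)

    def set(self, record, column_key, value):
        self.calls.append(('set', record, column_key, value))

    def remove(self, record, column, key):
        self.calls.append(('remove', record, column, key))


def configure_ovs_hw_offload(opvs):
    """Corrected comparison: look the option up inside the nested
    other_config dict instead of using a flat 'other_config:k' key."""
    something_changed = False
    other_config_fmt = 'other_config:{}'
    for row in opvs:
        for k, v in (('hw-offload', 'true'),
                     ('max-idle', '30000'),
                     ):
            if row.get('other_config', {}).get(k) != v:
                something_changed = True
                if v:
                    opvs.set('.', other_config_fmt.format(k), v)
                else:
                    opvs.remove('.', 'other_config', k)
    return something_changed
```

With this change, a row that already carries both values is correctly reported as unchanged, and the ``else`` branch becomes reachable for options whose desired value is empty.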

For the configure_ovs_dpdk() case, a fix is in progress in the following PR:
https://github.com/openstack-charmers/charm-layer-ovn/pull/41

Note: hw_offload configuration test cases will also need to be adapted.

Debug output:

root@node07:/var/lib/juju/agents/unit-ovn-chassis-dpdk-2/charm# vi lib/charms/ovn_charm.py
root@node07:/var/lib/juju/agents/unit-ovn-chassis-dpdk-2/charm# ./hooks/config-changed
none
> /var/lib/juju/agents/unit-ovn-chassis-dpdk-2/charm/lib/charms/ovn_charm.py(518)configure_ovs_dpdk()
-> if row.get('other_config').get(k) != v:
(Pdb) row
{'_uuid': UUID('85e8f98e-2ac3-42c5-811c-1220a28b5948'), 'bridges': UUID('bbc43e5e-79e0-4f9b-a717-337b196154e2'), 'cur_cfg': 28, 'datapath_types': ['netdev', 'system'], 'datapaths': {}, 'db_version': '8.2.0', 'dpdk_initialized': True, 'dpdk_version': 'DPDK 19.11.3', 'external_ids': {'hostname': 'node07.maas', 'ovn-encap-ip': '172.27.12.45', 'ovn-encap-type': 'geneve', 'ovn-remote': 'ssl:172.27.12.206:6642,ssl:172.27.12.198:6642,ssl:172.27.12.193:6642', 'rundir': '/var/run/openvswitch', 'system-id': 'node07.maas'}, 'iface_types': ['dpdk', 'dpdkr', 'dpdkvhostuser', 'dpdkvhostuserclient', 'erspan', 'geneve', 'gre', 'internal', 'ip6erspan', 'ip6gre', 'lisp', 'patch', 'stt', 'system', 'tap', 'vxlan'], 'manager_options': UUID('c909af8e-6046-4394-a84c-9c56eaaafc61'), 'next_cfg': 28, 'other_config': {'dpdk-init': 'true', 'dpdk-lcore-mask': '0x01', 'dpdk-socket-mem': '1024', 'pmd-cpu-mask': '0x9'}, 'ovs_version': '2.13.1', 'ssl': UUID('cef62424-0ed7-484a-b665-893e22a0fb83'), 'statistics': {}, 'system_type': 'ubuntu', 'system_version': '20.04'}
(Pdb) row.get('other_config')
{'dpdk-init': 'true', 'dpdk-lcore-mask': '0x01', 'dpdk-socket-mem': '1024', 'pmd-cpu-mask': '0x9'}
(Pdb) row.get('other_config').get('pmd-cpu-mask')
'0x9'
(Pdb) row.get('other_config:pmd-cpu-mask')
(Pdb)

description: updated
Alex Kavanagh (ajkavanagh) wrote :

Triaged: yes, this is definitely a bug in the code on master as of the date of this comment.

Scheduled to be fixed ASAP, based on the equivalent charm-helpers code that Bayani is currently working on.

Thanks for raising the bug, Bayani!

Changed in charm-ovn-chassis:
importance: Undecided → Critical
status: New → Triaged
Frode Nordahl (fnordahl) wrote :

Thank you for the bug and triage. While there is no doubt this is a bug, I'm not quite convinced about the currently triaged importance; the age of the bug and the absence of a merged fix are a testament to that.

A fix is underway and will be picked up as part of ongoing work scheduled to complete soon.

Changed in charm-ovn-chassis:
importance: Critical → Medium