virtio-net-tx-queue-size is reflected in nova.conf but not in the VM, even after a hard reboot

Bug #2026284 reported by Nishant Dash
This bug affects 2 people
Affects                       Status   Importance  Assigned to  Milestone
OpenStack Compute (nova)      Expired  Undecided   Unassigned
OpenStack Nova Compute Charm  Invalid  Undecided   Unassigned

Bug Description

After modifying the following nova-compute charm config options:
- virtio-net-rx-queue-size=512
- virtio-net-tx-queue-size=512

I hard rebooted my VM and also spawned a new one, and on both of them I see:
- virsh xml
```
# virsh dumpxml 2 | grep -i queue
      <driver name='vhost' rx_queue_size='512'/>
```

- nova.conf
```
# grep -i queue /etc/nova/nova.conf
tx_queue_size = 512
rx_queue_size = 512
```

- inside the vm
```
root@jammy-135110:~# ethtool -g ens2
Ring parameters for ens2:
Pre-set maximums:
RX: 512
RX Mini: n/a
RX Jumbo: n/a
TX: 256
Current hardware settings:
RX: 512
RX Mini: n/a
RX Jumbo: n/a
TX: 256
```

The RX config is propagated to the guest, but the TX config is not.
Please let me know if any more information is needed.
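
For anyone reproducing this, the attributes can also be checked from the host with a minimal sketch using the libvirt Python bindings (python3-libvirt). The domain name below is only an example; substitute the instance's libvirt name:
```
# Sketch only: print the <driver> attributes for every interface of a
# running domain, using the libvirt Python bindings.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000002')   # example name
root = ET.fromstring(dom.XMLDesc(0))

for iface in root.findall('./devices/interface'):
    driver = iface.find('driver')
    if driver is not None:
        # e.g. {'name': 'vhost', 'rx_queue_size': '512'} -- note the
        # missing tx_queue_size, matching the virsh output above.
        print(driver.attrib)

conn.close()
```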

----------------------------------------------------------

env:
- focal ussuri
- nova-compute:
    charm: nova-compute
    channel: ussuri/stable
    revision: 669
- this is a freshly deployed OpenStack on VMs (not on bare metal)
- libvirt: 6.0.0-0ubuntu8.16
- nova-compute-libvirt 21.2.4-0ubuntu2.5
- qemu 4.2-3ubuntu6.27

Revision history for this message
Billy Olsen (billy-olsen) wrote :

This does not appear to be a charm issue; rather, it looks like a potential nova issue. I can confirm that setting rx_queue_size and tx_queue_size results in the charm updating nova.conf, but the resulting hard-rebooted guest only gets rx_queue_size, not tx_queue_size.

Changed in nova:
status: New → Incomplete
Revision history for this message
Rafael Lopez (rafael.lopez) wrote :

Based on the libvirt docs [1], this may be a limitation of the interface type (not the driver). tx_queue_size only seems to be supported for the 'vhostuser' interface type:

"tx_queue_size
The optional tx_queue_size attribute controls the size of virtio ring ..... For instance, QEMU v2.9 requires value to be a power of two from [256, 1024] range. In addition to that, this may work only for a subset of interface types, e.g. aforementioned QEMU enables this option only for vhostuser type. "

Nova actually only sets tx when the device is type 'vhostuser', via the _set_config_VIFHostDevice() codepath:
https://opendev.org/openstack/nova/src/commit/f0565e84ee9578d6dafd22d57fb0c95cb3984c1e/nova/virt/libvirt/vif.py#L533

[1] https://libvirt.org/formatdomain.html#setting-nic-driver-specific-options
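
To make the asymmetry concrete, here is a simplified, hypothetical sketch of the behaviour described above (a paraphrase, not nova's actual vif.py code; the helper name and arguments are illustrative only):
```
# Illustrative only -- not nova's actual code.
def driver_queue_attrs(vif_type, rx_queue_size, tx_queue_size):
    """Return the queue-size attributes that end up on <driver .../>."""
    attrs = {}
    if rx_queue_size:
        # rx_queue_size is applied to ordinary vhost-backed interfaces,
        # which is why it shows up in the reporter's domain XML.
        attrs['rx_queue_size'] = str(rx_queue_size)
    if tx_queue_size and vif_type == 'vhostuser':
        # tx_queue_size is only applied on the vhostuser code path
        # (_set_config_VIFHostDevice), so a plain vhost interface
        # never receives it.
        attrs['tx_queue_size'] = str(tx_queue_size)
    return attrs

# With the reporter's settings, a bridge/vhost interface gets only RX:
#   driver_queue_attrs('bridge', 512, 512) -> {'rx_queue_size': '512'}
```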

Revision history for this message
Felipe Reyes (freyes) wrote :

Marking the charm-nova-compute task as invalid since the charm is not at fault here (nor is nova), based on Rafael's comment. If you disagree, please reopen and state what needs to be fixed.

Changed in charm-nova-compute:
status: New → Invalid
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

Changed in nova:
status: Incomplete → Expired