Nested KVM Networking Issue

Bug #1971050 reported by Modyn
This bug affects 1 person
Affects: neutron | Status: Invalid | Importance: Undecided | Assigned to: Unassigned

Bug Description

## Host environment
 - Operating system: Ubuntu 20.04 server
 - OS/kernel version: 5.13.0-40-generic
 - Architecture: x86_64
 - QEMU version: 4.2.1 (Debian 1:4.2-3ubuntu6.21), installed via `sudo apt install virt-manager`

## Emulated/Virtualized environment
 - Operating system: Ubuntu 20.04 server
 - OS/kernel version: 5.13.0-40-generic
 - Architecture: x86_64

## Description of problem
Hi,

Inside OpenStack I have an Ubuntu 20.04 instance, on which I installed KVM (via virt-manager) and created a nested Ubuntu 20.04 VM using the default networking settings (NAT). Once the VM is up and running, ping to 8.8.8.8 works, and ping to google.com also works, which shows DNS resolution is fine. However, there is no usable connectivity for actual data transfers: `sudo apt update` fails to fetch any packages, `wget` to google.com shows that it connects but never downloads anything, and `curl` to other websites behaves the same way.

I can confirm that the OpenStack instance itself has full internet access, including ping and wget; only the nested VM is broken.

P.S. I have enabled IP forwarding, adjusted iptables, and even disabled the firewall, but nothing changed.

Would you please look into this?
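For reference, a read-only sketch of how the IP-forwarding state mentioned above can be checked; the firewall commands need root, so they are left as comments:

```shell
# Prints 1 when the kernel routes packets between interfaces, which
# libvirt's NAT network needs in order to pass guest traffic out via the uplink.
sysctl -n net.ipv4.ip_forward
# Firewall state (root required):
#   sudo iptables -L FORWARD -v -n   # libvirt inserts its LIBVIRT_FW* chains here
#   sudo ufw status
```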

## Steps to reproduce
1. Create an OpenStack instance from the Ubuntu 20.04 server image.
2. Update and upgrade packages, enable IP forwarding (set to 1), and configure the firewall.
3. Install kernel 5.13.0-40 and virt-manager, then reboot.
4. Create a VM with the default KVM networking (NAT) using the Ubuntu 20.04 server image.
5. Try ping, wget, curl, etc.

These are my commands after creating an instance with 8 vCPUs, 16 GB RAM, a 100 GB disk, and the Ubuntu cloud 20.04 image:

```
sudo apt update && sudo apt full-upgrade -y && sudo apt install linux-image-5.13.0-40-generic linux-headers-5.13.0-40-generic -y && sudo reboot
sudo apt update && sudo uname -a
Linux test 5.13.0-40-generic #45~20.04.1-Ubuntu SMP Mon Apr 4 09:38:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
sudo apt install virt-manager -y && sudo reboot
sudo systemctl status libvirtd
```

It is running; the default NAT network uses the 192.168.122.x range.

```
sudo usermod -a -G libvirt ubuntu
```

I then downloaded the Ubuntu 20.04 server image from https://releases.ubuntu.com/20.04/ubuntu-20.04.4-live-server-amd64.iso and created a new VM with KVM via virt-manager, as shown below:
https://gitlab.com/qemu-project/qemu/uploads/8bd4c7381a60832b3a5fcd9dbd3665de/image.png

```
qemu-system-x86_64 --version
QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.21)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
```

Here is my networking (`ip a` output):
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:10:60:0e brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.20.30.52/24 brd 10.20.30.255 scope global dynamic ens3
       valid_lft 34758sec preferred_lft 34758sec
    inet6 fe80::f816:3eff:fe10:600e/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:98:07:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:98:07:1a brd ff:ff:ff:ff:ff:ff
5: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:f9:5d:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fef9:5d4d/64 scope link
       valid_lft forever preferred_lft forever
```

And this is my iptables ruleset:

```
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_INP all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
LIBVIRT_FWX all -- anywhere anywhere
LIBVIRT_FWI all -- anywhere anywhere
LIBVIRT_FWO all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_OUT all -- anywhere anywhere

Chain LIBVIRT_FWI (1 references)
target prot opt source destination
ACCEPT all -- anywhere 192.168.122.0/24 ctstate RELATED,ESTABLISHED
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable

Chain LIBVIRT_FWO (1 references)
target prot opt source destination
ACCEPT all -- 192.168.122.0/24 anywhere
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable

Chain LIBVIRT_FWX (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere

Chain LIBVIRT_INP (1 references)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:67

Chain LIBVIRT_OUT (1 references)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
ACCEPT tcp -- anywhere anywhere tcp dpt:68
```

I think this is a bug because I configured the same settings on a bare-metal system and everything worked fine, yet the problem occurs when I use the OpenStack instance. (I actually suspect this problem is specific to the nested KVM situation!)

I would be glad to hear any hints on how to solve this issue!

Thanks
Best regards

Revision history for this message
yatin (yatinkarel) wrote :

Hi Modyn,

Considering the scenarios you mentioned, it doesn't look related to nested virtualization; I suspect it's related to the MTU setting inside the nested VM.
In your OpenStack instance I see the MTU is 1442 (2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000).

Your nested VM likely has an MTU greater than 1442 (my guess is the default of 1500). If that is the case, can you retry your curl, wget, and apt update tests after lowering the MTU below 1442 (1400 is a good test value) on the nested VM's primary interface: `ip link set dev <primary interface> mtu 1400`.
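For context, 1442 is the value you get when tunnel encapsulation overhead is subtracted from the standard 1500-byte Ethernet MTU. A quick sketch of the arithmetic; the overhead figures are typical values for Geneve (as used by ML2/OVN) and VXLAN backends, not something confirmed for this particular deployment:

```shell
# Overlay encapsulation reduces the MTU available to the instance port.
ETH_MTU=1500
GENEVE_OVERHEAD=58   # outer IPv4 (20) + UDP (8) + Geneve hdr+option (16) + inner Ethernet (14)
VXLAN_OVERHEAD=50    # outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
echo "geneve instance MTU: $((ETH_MTU - GENEVE_OVERHEAD))"   # → geneve instance MTU: 1442
echo "vxlan instance MTU: $((ETH_MTU - VXLAN_OVERHEAD))"     # → vxlan instance MTU: 1450
# A path-MTU probe from inside the nested VM (28 = IP + ICMP headers):
#   ping -M do -s $((1442 - 28)) 8.8.8.8   # should succeed
#   ping -M do -s $((1500 - 28)) 8.8.8.8   # should fail if the path MTU is 1442
```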

Modyn (modyngs) wrote :

@yatin (yatinkarel)
Hi, and thanks for the reply.

I have set the MTU to 1400, but the same problem persists.

Here is my ip a:

```
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:10:60:0e brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.20.30.52/24 brd 10.20.30.255 scope global dynamic ens3
       valid_lft 39767sec preferred_lft 39767sec
    inet6 fe80::f816:3eff:fe10:600e/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:98:07:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1400 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:98:07:1a brd ff:ff:ff:ff:ff:ff
5: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:9e:90:77 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe9e:9077/64 scope link
       valid_lft forever preferred_lft forever
```

Hope to get some hints ...
Thanks
Best regards

yatin (yatinkarel) wrote :

@Modyn, OK, so it may be some other issue then.

I tried to reproduce this on an OpenStack VM following the steps you shared, but it did not reproduce for me. I used everything at its defaults, i.e. I changed nothing related to IP forwarding, iptables, or firewalls in either the OpenStack VM or the nested VM.

In your case, just to rule out port security, you could disable port security on the OpenStack VM's port and check whether it helps. It shouldn't be related, since connectivity works fine from your OpenStack instance, but it is good to rule it out.

If it still fails, can you also share details about your OpenStack setup (version, multi-node or single-node, networking backend, etc.), along with verbose output from the failing curl, wget, and apt update runs? You can also run tcpdump to see what is happening to the ping and curl packets and share those captures, and mention any custom settings you applied to your OpenStack VM or nested VM. Then maybe someone can help you out.

Modyn (modyngs) wrote :

Dear @yatin (yatinkarel)

Thanks for the reply.

The problem was solved by setting the MTU to 1442, not only on the host interfaces but also in the VM's XML definition in virt-manager:
      <model type='virtio'/>
      <mtu size='1442'/>
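For reference, the resulting interface section of the domain XML would look roughly like this (a sketch: the MAC address is a placeholder, and the `<mtu>` element on `<interface>` requires a reasonably recent libvirt):

```xml
<interface type='network'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source network='default'/>
  <model type='virtio'/>
  <!-- cap the guest-visible MTU at the overlay MTU of the OpenStack port -->
  <mtu size='1442'/>
</interface>
```

With the virtio model, the guest driver should pick up the advertised MTU automatically at boot, so no manual `ip link` change is needed inside the nested VM.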

Thanks
Best regards

yatin (yatinkarel) wrote :

@Modyn, OK, thanks for confirming. I will close the bug as INVALID for neutron, as I don't see anything to fix on the neutron side.

Changed in neutron:
status: New → Incomplete
status: Incomplete → Invalid
Modyn (modyngs) wrote :

Sure,

Thanks Again for your help
Best regards
