networkd-dispatcher[1079401]: ERROR:Unknown interface index 88 seen even after reload

Bug #1972664 reported by satheesh
This bug affects 5 people
Affects                              Status      Importance   Assigned to   Milestone
networkd-dispatcher (Ubuntu)         Confirmed   Medium       Unassigned
networkd-dispatcher (Ubuntu Focal)   Confirmed   Undecided    Unassigned

Bug Description

May 9 03:29:05 networkd-dispatcher[2150463]: WARNING:Unknown index 90 seen, reloading interface list

May 9 03:30:17 networkd-dispatcher[2150463]: ERROR:Unknown interface index 90 seen even after reload
May 9 03:30:17 networkd-dispatcher[2150463]: WARNING:Unknown index 90 seen, reloading interface list
May 9 03:30:17 networkd-dispatcher[2150463]: ERROR:Unknown interface index 90 seen even after reload

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: networkd-dispatcher 2.1-2~ubuntu20.04.3
ProcVersionSignature: Ubuntu 5.4.0-107.121-generic 5.4.174
Uname: Linux 5.4.0-107-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.23
Architecture: amd64
CasperMD5CheckResult: pass
Date: Mon May 9 07:36:49 2022
ExecutablePath: /usr/bin/networkd-dispatcher
InstallationDate: Installed on 2022-03-08 (61 days ago)
InstallationMedia: Ubuntu-Server 20.04.4 LTS "Focal Fossa" - Release amd64 (20220223.1)
InterpreterPath: /usr/bin/python3.8
PackageArchitecture: all
ProcEnviron:
 LANG=en_US.UTF-8
 PATH=(custom, no user)
Python3Details: /usr/bin/python3.8, Python 3.8.10, python3-minimal, 3.8.2-0ubuntu2
PythonDetails: N/A
SourcePackage: networkd-dispatcher
UpgradeStatus: No upgrade log present (probably fresh install)

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in networkd-dispatcher (Ubuntu):
status: New → Confirmed
Ross Patterson (rossp) wrote :

For some context: for me this is happening about every minute, and I suspect it's related to Docker somehow. I have a set of stable containers (not restarting; the same set has been running for hours), and I see these recurring log messages interleaved with a bunch of other container-related messages. I'm having a hard time figuring out from the timestamps where the loop begins, so here's a `/var/log/syslog` snippet of one cycle, though I don't know whether it starts at the right place:

```
...
Jan 29 15:21:33 localhost systemd[1]: run-docker-runtime\x2drunc-moby-c83a40b7d0f954c20d5f215e197c43c302a23fa1bcd5d815cd2af853e08dd640-runc.0ZIjFd.mount: Deactivated successfully.
Jan 29 15:21:33 localhost systemd[1]: run-docker-runtime\x2drunc-moby-417201f6d792614d1d95a3160989ca55fab185df0cf6aec34304173d09be9a73-runc.rIjQ0e.mount: Deactivated successfully.
Jan 29 15:21:58 localhost systemd[1]: run-docker-runtime\x2drunc-moby-417201f6d792614d1d95a3160989ca55fab185df0cf6aec34304173d09be9a73-runc.UUBKdH.mount: Deactivated successfully.
Jan 29 15:22:18 localhost systemd[1]: run-docker-runtime\x2drunc-moby-1dc5d8cb7c171e42a5a4c665d64cdb361ce5ce235eb57c3ed1877482b4bc5969-runc.lYaK7r.mount: Deactivated successfully.
Jan 29 15:22:18 localhost systemd[1]: run-docker-runtime\x2drunc-moby-068c7ed2f83c60236ccbcc3ce46740c8cc883941ca622248705d0d3621c5e6e0-runc.9Jjn0m.mount: Deactivated successfully.
Jan 29 15:22:18 localhost systemd[1]: run-docker-runtime\x2drunc-moby-417201f6d792614d1d95a3160989ca55fab185df0cf6aec34304173d09be9a73-runc.L0RQO3.mount: Deactivated successfully.
Jan 29 15:22:20 localhost systemd-networkd[159]: veth91c7a22: Link UP
Jan 29 15:22:20 localhost networkd-dispatcher[220]: WARNING:Unknown index 1330 seen, reloading interface list
Jan 29 15:22:20 localhost kernel: [20600.521624] br-e80ace13ece7: port 1(veth91c7a22) entered blocking state
Jan 29 15:22:20 localhost kernel: [20600.521627] br-e80ace13ece7: port 1(veth91c7a22) entered disabled state
Jan 29 15:22:20 localhost kernel: [20600.521742] device veth91c7a22 entered promiscuous mode
Jan 29 15:22:20 localhost kernel: [20600.521832] br-e80ace13ece7: port 1(veth91c7a22) entered blocking state
Jan 29 15:22:20 localhost kernel: [20600.521835] br-e80ace13ece7: port 1(veth91c7a22) entered forwarding state
Jan 29 15:22:20 localhost kernel: [20600.522086] br-e80ace13ece7: port 1(veth91c7a22) entered disabled state
Jan 29 15:22:20 localhost dockerd[1066]: time="2023-01-29T15:22:20.074954374-08:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
Jan 29 15:22:20 localhost dockerd[1066]: time="2023-01-29T15:22:20.075007074-08:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
Jan 29 15:22:20 localhost systemd-udevd[598489]: Using default interface naming scheme 'v249'.
Jan 29 15:22:20 localhost systemd-udevd[598487]: Using default interface naming scheme 'v249'.
Jan 29 15:22:20 localhost containerd[580]: time="2023-01-29T15:22:20.203356574-08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.con...
```

Trent Lloyd (lathiat) wrote :

I see this a lot when OpenStack starts a virtual machine, probably because tap devices are created and discovered.

I suspect, but have not yet proven, that this creates a lot of netlink traffic that other processes also receive. In the case of libvirt, it sometimes throws this error, and it is always correlated directly with these networkd-dispatcher rescan errors:
libvirtd[8647]: nl_recv returned with error: No buffer space available

I have also, less commonly, seen the same from livepatch, again directly correlated with these:
canonical-livepatch.canonical-livepatchd[678488]: Error receiving netlink message: no buffer space available.
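As background, ENOBUFS ("No buffer space available") on a netlink socket means the kernel dropped messages because the receiver's socket buffer filled before the process drained it. A minimal sketch for inspecting the kernel defaults that bound those buffers, assuming a Linux host with the standard procfs sysctl paths (nothing here is specific to networkd-dispatcher):

```python
# Print the default and maximum socket receive buffer sizes (bytes); these
# also bound netlink sockets unless a process raises SO_RCVBUF itself.
for name in ('rmem_default', 'rmem_max'):
    with open(f'/proc/sys/net/core/{name}') as f:
        print(name, '=', f.read().strip())
```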

I need to research the correlation with the netlink buffer errors a bit more to be sure about it. But this error seems to be thrown any time an interface is created, and should probably be fixed anyway. It seems this is fixed upstream here:

https://gitlab.com/craftyguy/networkd-dispatcher/-/commit/8d09e931abdb3660a514ad9d96d572f135dec1a4

commit 8d09e931abdb3660a514ad9d96d572f135dec1a4
Author: Wenxin Wang <email address hidden>
Date: Mon Jun 6 15:18:08 2022 +0800

    _interface_scan: fix wrong index into iface map

    This typo in the code made networkd-dispatcher always update
    ALL interfaces's state when any new interface is found.

diff --git a/networkd-dispatcher b/networkd-dispatcher
index 0a20361..2df852b 100755
--- a/networkd-dispatcher
+++ b/networkd-dispatcher
@@ -273,7 +273,7 @@ class Dispatcher():
         # Append new interfaces, keeping old ones around to avoid hotplug race
         # condition (issue #20)
         for i in iface_list:
-            if i not in self.iface_names_by_idx:
+            if i.idx not in self.iface_names_by_idx:
                 self.iface_names_by_idx[i.idx] = i.name
                 self.ifaces_by_name[i.name] = i
         logger.debug('Performed interface scan; state: %r', self)
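
To spell out why that one-character change matters: `self.iface_names_by_idx` is keyed by the integer interface index, so asking whether the whole interface record is `in` the dict can never match, and every scan re-adds (and overwrites) every interface. A minimal sketch, with the record type assumed from the diff context rather than taken from the actual source:

```python
from collections import namedtuple

# Illustrative stand-in for the scanned interface records; the real
# networkd-dispatcher type has more fields, these two come from the diff.
Interface = namedtuple('Interface', ['idx', 'name'])

iface_names_by_idx = {2: 'eth0'}                 # keyed by integer ifindex
iface_list = [Interface(idx=2, name='eth0')]

for i in iface_list:
    # Buggy check: a namedtuple is never equal to an int key, so this is
    # always True and the already-known interface gets re-added/overwritten.
    print(i not in iface_names_by_idx)           # True

    # Fixed check: compares the index itself, so known interfaces are kept
    # and only genuinely new ones are appended.
    print(i.idx not in iface_names_by_idx)       # False
```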

tags: added: sts
Changed in networkd-dispatcher (Ubuntu):
importance: Undecided → Medium
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in networkd-dispatcher (Ubuntu Focal):
status: New → Confirmed