Warning: The unit file, source configuration file or drop-ins of {apt-news,esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

Bug #2055239 reported by Nobuto Murata
This bug affects 14 people
Affects                          Status   Importance  Assigned to       Milestone
snapd                            New      Undecided   Zygmunt Krynicki
ubuntu-advantage-tools (Ubuntu)  Invalid  Undecided   Unassigned

Bug Description

I recently started seeing the following warning messages when I run `apt update`.

$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.
...

apt-news.service, for example, is at /lib/systemd/system/apt-news.service, and it's a static file managed by the package. Does the package maintainer script call systemd-related hooks to reload the config whenever the package gets updated?

$ systemctl cat apt-news.service
# /usr/lib/systemd/system/apt-news.service
# APT News is hosted at https://motd.ubuntu.com/aptnews.json and can include
# timely information related to apt updates available to your system.
...

$ dpkg -S /lib/systemd/system/apt-news.service
ubuntu-pro-client: /lib/systemd/system/apt-news.service

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: ubuntu-pro-client 31.1
ProcVersionSignature: Ubuntu 6.6.0-14.14-generic 6.6.3
Uname: Linux 6.6.0-14-generic x86_64
NonfreeKernelModules: zfs
ApportVersion: 2.28.0-0ubuntu1
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Wed Feb 28 13:06:35 2024
InstallationDate: Installed on 2024-01-08 (51 days ago)
InstallationMedia: Ubuntu 24.04 LTS "Noble Numbat" - Daily amd64 (20240104)
ProcEnviron:
 LANG=en_US.UTF-8
 PATH=(custom, no user)
 SHELL=/bin/bash
 TERM=xterm-256color
 XDG_RUNTIME_DIR=<set>
SourcePackage: ubuntu-advantage-tools
UpgradeStatus: No upgrade log present (probably fresh install)
apparmor_logs.txt:

cloud-id.txt-error:
 Failed running command 'cloud-id' [exit(2)]. Message: REDACTED config part /etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient permissions
 REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, insufficient permissions
 REDACTED config part /etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient permissions
 REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, insufficient permissions
livepatch-status.txt-error: Invalid command specified '/snap/bin/canonical-livepatch status'.
uaclient.conf:
 contract_url: https://contracts.canonical.com
 log_level: debug

Revision history for this message
Nobuto Murata (nobuto) wrote :
information type: Private → Public
tags: removed: need-amd64-retrace
Revision history for this message
Renan Rodrigo (renanrodrigo) wrote :

Hello, Nobuto,

First of all, thanks for reporting this issue.

We made changes to the apt-news service file (we added the AppArmor profile and systemd security config there), and no, we didn't reload the daemon by default, which may be causing those warnings.

However, I could not reproduce this behavior. Do you have steps to reproduce it on a fresh system?

I will bring this to the team.

Changed in ubuntu-advantage-tools (Ubuntu):
status: New → Incomplete
Revision history for this message
Nobuto Murata (nobuto) wrote :

It was puzzling indeed, but now I have steps to reproduce it.

$ sudo apt update
-> no warning

$ sudo apt upgrade
-> to install something to invoke the rsyslog trigger.

Processing triggers for rsyslog (8.2312.0-3ubuntu3) ...
Warning: The unit file, source configuration file or drop-ins of rsyslog.service changed on disk. Run 'systemctl daemon-reload' to reload units.

$ sudo apt update
-> will see the warning.

The warning happens with every systemctl command, so it's not really an issue specific to ubuntu-pro-tools. However, systemctl warnings are not usually expected from `apt` commands, which is why this comes as a surprise. The proper place to fix this may not be in pro-tools itself but somewhere else.

Changed in ubuntu-advantage-tools (Ubuntu):
status: Incomplete → New
Revision history for this message
Alberto Contreras (aciba) wrote :

Hello Nobuto. Thanks again for reporting this.

I have been trying to reproduce the error with no success. I tried some combinations of:

- In lxd container with [jammy, noble]
- [pro downgrade / upgrade targeting 31.1]
- pro enable / disable
- [pro downgrade / upgrade targeting 31.1]
- apt update
- apt upgrade

Could you please provide more information about it?

Many thanks.

Changed in ubuntu-advantage-tools (Ubuntu):
status: New → Incomplete
Revision history for this message
Nobuto Murata (nobuto) wrote :

I tried to minimize the test case, but no luck so far. I will report back whenever I find something additional.

Revision history for this message
Paride Legovini (paride) wrote :

Interestingly this is now happening on my Noble system:

$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.

I'm quite sure I didn't manually touch those units.

I first noticed this yesterday, 2024-03-03, when apparently nothing relevant happened wrt the u-a-t package. I have 31.1 installed, from the release pocket.

Revision history for this message
Paride Legovini (paride) wrote :

Now I remember one relevant thing that happened in the past 48h: I rebooted the affected system.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Whoever hits this, please help us spot the difference that happened, as we still lack a reproducer.

# check if it has been changed
dpkg --verify ubuntu-advantage-tools

# check if there are drop ins that got added
systemctl cat apt-news.service

@nobuto - was yours really an empty file or did you not copy more than one line?

Nobuto Murata (nobuto)
description: updated
Revision history for this message
Nobuto Murata (nobuto) wrote :

> @nobuto - was yours really an empty file or did you not copy more than one line?

Are you referring to the `systemctl cat apt-news.service` in the bug description? If so, my apologies. I pasted only the first lines of the content on purpose, just to confirm the full path of the service. The file wasn't empty at all, and I didn't touch it manually either.

Revision history for this message
Nobuto Murata (nobuto) wrote :

Just for completeness.

$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hit:1 http://ftp.riken.jp/Linux/ubuntu noble InRelease
Hit:2 http://ftp.riken.jp/Linux/ubuntu noble-updates InRelease
Hit:3 http://ftp.riken.jp/Linux/ubuntu noble-backports InRelease
Hit:4 http://ftp.riken.jp/Linux/ubuntu noble-proposed InRelease
Hit:5 https://repo.steampowered.com/steam stable InRelease
Hit:6 https://packages.microsoft.com/repos/code stable InRelease
Hit:7 http://security.ubuntu.com/ubuntu noble-security InRelease
Get:8 https://pkgs.tailscale.com/stable/ubuntu noble InRelease
Fetched 6,563 B in 1s (6,699 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.

$ dpkg --verify ubuntu-advantage-tools; echo $?
0

$ apt policy ubuntu-advantage-tools
ubuntu-advantage-tools:
  Installed: 31.1
  Candidate: 31.1
  Version table:
     31.2 100
        100 http://ftp.riken.jp/Linux/ubuntu noble-proposed/main amd64 Packages
        100 http://ftp.riken.jp/Linux/ubuntu noble-proposed/main i386 Packages
 *** 31.1 500
        500 http://ftp.riken.jp/Linux/ubuntu noble/main amd64 Packages
        500 http://ftp.riken.jp/Linux/ubuntu noble/main i386 Packages
        100 /var/lib/dpkg/status

$ systemctl cat apt-news.service
# /usr/lib/systemd/system/apt-news.service
# APT News is hosted at https://motd.ubuntu.com/aptnews.json and can include
# timely information related to apt updates available to your system.
# This service runs in the background during an `apt update` to download the
# latest news and set it to appear in the output of the next `apt upgrade`.
# The script won't do anything if you've run: `pro config set apt_news=false`.
# The script will limit network requests to at most once per 24 hours.
# You can also host your own aptnews.json and configure your system to use it
# with the command:
# `pro config set apt_news_url=https://yourhostname/path/to/aptnews.json`

[Unit]
Description=Update APT News

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/lib/ubuntu-advantage/apt_news.py
AppArmorProfile=ubuntu_pro_apt_news
CapabilityBoundingSet=~CAP_SYS_ADMIN
CapabilityBoundingSet=~CAP_NET_ADMIN
CapabilityBoundingSet=~CAP_NET_BIND_SERVICE
CapabilityBoundingSet=~CAP_SYS_PTRACE
CapabilityBoundingSet=~CAP_NET_RAW
PrivateTmp=true
RestrictAddressFamilies=~AF_NETLINK
RestrictAddressFamilies=~AF_PACKET
# These may break some tests, and should be enabled carefully
#NoNewPrivileges=true
#PrivateDevices=true
#ProtectControlGroups=true
# ProtectHome=true seems to reliably break the GH integration test with a lunar lxd on jammy host
#ProtectHome=true
#ProtectKernelModules=true
#ProtectKernelTunables=true
#ProtectSystem=full
#RestrictSUIDSGID=true
# Unsupported in bionic
# Suggestion from systemd.exec(5) manpage on SystemCallFilter
#SystemCallFilter=@system-service
#SystemCallFilter=~@mount
#SystemC...


Revision history for this message
Nobuto Murata (nobuto) wrote :

The list of files modified in the last two hours (if I increase the range to the last 2 days, it lists almost everything).

$ find /etc/systemd /lib/systemd/ -mmin -7200
/etc/systemd/system
/etc/systemd/system/snap-chromium-2768.mount
/etc/systemd/system/snap-hugo-18726.mount
/etc/systemd/system/snap-juju-26548.mount
/etc/systemd/system/sshd-keygen@.service.d
/etc/systemd/system/snap-zoom\x2dclient-225.mount
/etc/systemd/system/snap-hugo-18753.mount
/etc/systemd/system/snap-juju-25751.mount
/etc/systemd/system/graphical.target.wants
/etc/systemd/system/multi-user.target.wants
/etc/systemd/system/multi-user.target.wants/snap-chromium-2768.mount
/etc/systemd/system/multi-user.target.wants/snap-hugo-18726.mount
/etc/systemd/system/multi-user.target.wants/snap-juju-26548.mount
/etc/systemd/system/multi-user.target.wants/snap-zoom\x2dclient-225.mount
/etc/systemd/system/multi-user.target.wants/snap-hugo-18753.mount
/etc/systemd/system/multi-user.target.wants/snap-juju-25751.mount
/etc/systemd/system/multi-user.target.wants/snap-hugo-18706.mount
/etc/systemd/system/snap.juju.fetch-oci.service
/etc/systemd/system/snap-hugo-18706.mount
/etc/systemd/system/snapd.mounts.target.wants
/etc/systemd/system/snapd.mounts.target.wants/snap-chromium-2768.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-hugo-18726.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-juju-26548.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-zoom\x2dclient-225.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-hugo-18753.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-juju-25751.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-hugo-18706.mount
/lib/systemd/system
/lib/systemd/system/tailscaled.service
/lib/systemd/system-generators

Revision history for this message
Nobuto Murata (nobuto) wrote (last edit ):

Hmm, it happened again between those two `apt update` runs. It might be snapd-related.

2024-03-05T10:49:54.513356+09:00 t14 sudo: nobuto : TTY=pts/0 ; PWD=/home/nobuto ; USER=root ; COMMAND=/usr/bin/apt update
2024-03-05T11:00:47.422897+09:00 t14 sudo: nobuto : TTY=pts/0 ; PWD=/home/nobuto ; USER=root ; COMMAND=/usr/bin/apt update

$ uptime
 11:01:51 up 14 min, 1 user, load average: 0.91, 0.90, 0.75

$ find /etc/systemd /lib/systemd -mmin -15
/etc/systemd/system
/etc/systemd/system/snap-go-10535.mount
/etc/systemd/system/multi-user.target.wants
/etc/systemd/system/multi-user.target.wants/snap-go-10535.mount
/etc/systemd/system/snapd.mounts.target.wants
/etc/systemd/system/snapd.mounts.target.wants/snap-go-10535.mount

$ snap refresh --time
timer: 00:00~24:00/4
last: today at 10:53 JST
next: today at 17:07 JST

Revision history for this message
Grant Orndorff (orndorffgrant) wrote :

Thank you nobuto! With that I was able to reproduce the issue.

lxc launch ubuntu-daily:noble test
lxc exec test -- apt update # this one works as expected
lxc exec test -- snap install snapd
lxc exec test -- apt update # this one has the warnings in the bug report

assigning this bug to snapd

Revision history for this message
Haw Loeung (hloeung) wrote :

Seeing this myself:

| $ sudo apt-get update
| Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
| Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.

Revision history for this message
zeroc (zero-c) wrote :

I get the same warnings after editing 3 files in /etc/apt/sources.list.d/

Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hit:1 http://security.ubuntu.com/ubuntu noble-security InRelease
Get:2 https://cli.github.com/packages stable InRelease [3,917 B]
Hit:3 https://dl.google.com/linux/chrome/deb stable InRelease
Hit:4 https://dl.winehq.org/wine-builds/ubuntu jammy InRelease
Hit:5 http://archive.ubuntu.com/ubuntu noble InRelease
Hit:6 https://ppa.launchpadcontent.net/oibaf/graphics-drivers/ubuntu noble InRelease
Hit:7 https://download.vscodium.com/debs vscodium InRelease
Hit:8 https://repo.steampowered.com/steam stable InRelease
Get:9 https://ppa.launchpadcontent.net/savoury1/gimp/ubuntu jammy InRelease [18.1 kB]
Hit:10 http://archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:11 http://archive.ubuntu.com/ubuntu noble-backports InRelease
Hit:12 https://esm.ubuntu.com/apps/ubuntu noble-apps-security InRelease
Hit:13 https://esm.ubuntu.com/apps/ubuntu noble-apps-updates InRelease
Hit:14 https://esm.ubuntu.com/infra/ubuntu noble-infra-security InRelease
Hit:15 https://esm.ubuntu.com/infra/ubuntu noble-infra-updates InRelease
Get:16 https://ppa.launchpadcontent.net/savoury1/gimp/ubuntu jammy/main amd64 Packages [26.6 kB]
Get:17 https://ppa.launchpadcontent.net/savoury1/gimp/ubuntu jammy/main i386 Packages [14.9 kB]
Get:18 https://ppa.launchpadcontent.net/savoury1/gimp/ubuntu jammy/main Translation-en [14.4 kB]
Fetched 77.8 kB in 2s (38.7 kB/s)

Zygmunt Krynicki (zyga)
Changed in snapd:
assignee: nobody → Zygmunt Krynicki (zyga)
Revision history for this message
Zygmunt Krynicki (zyga) wrote :

I've reproduced this and collected forkstat logs from the installation of the snapd snap on an otherwise pristine "noble" system. I think what is going on is that systemd stays in a mode where it knows that the units on disk have changed vs. the units in memory, and it will print the warning until reloaded. The fact that the apt hooks interact with systemd units is sufficient to trigger the warning:

`apt update` causes this to execute:

10:37:56 exec 3523 sh -c -- [ ! -e /run/systemd/system ] || [ $(id -u) -ne 0 ] || systemctl start --no-block apt-news.service esm-cache.service || true

This is enough for the warning.

The remaining question is where in the installation of snapd we modify units after the last daemon-reload. I'm focusing on that aspect now.

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Snapd touches neither apt-news.service nor esm-cache.service.

On my system the only mention of esm-cache.service is in uaclient/actions.py:

zyga@ciri:/$ grep -FR esm-cache.service usr/ 2>/dev/null
usr/lib/python3/dist-packages/uaclient/actions.py: "esm-cache.service",

I've increased systemd logging to debug to see what is replacing the service but I cannot find any evidence of that in the logs.

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Removing ubuntu-pro-client silences this, so the installation of the snapd snap no longer causes any side effects. While I can see that installing snapd has some impact on ubuntu-pro-client, I cannot yet understand how.

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

Check the postinst script of the binary packages produced by src:ubuntu-advantage-tools. The binary packages that install systemd units must call daemon-reload at some point after the new unit file was installed.

Revision history for this message
Nobuto Murata (nobuto) wrote :

It's neither the apt-news nor the esm-cache service that was modified.

It looks like systemd warns about daemon-reload in any case where any systemd unit file has been modified and daemon-reload wasn't called afterwards.
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/2055239/comments/12

Revision history for this message
Andreas Hasenack (ahasenack) wrote :

> It's neither the apt-news nor the esm-cache service that was modified.

> It looks like systemd warns about daemon-reload in any case where any systemd unit file has been
> modified and daemon-reload wasn't called afterwards.

I understand, but in comment #14 the warning is very specific about the unit files that changed: apt-news.service and esm-cache.service.

Could it be that something else installed an override config for those units elsewhere (/run, or /etc), and then didn't issue the daemon-reload?

Could we get "systemctl cat apt-news.service esm-cache.service" output after this warning? It will say exactly which files are being considered, whether it's just /lib/systemd/system/{apt-news,esm-cache}.service or other config snippets too.

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

With a closer look I ended up running this loop while looking at systemd debug logs:

sudo snap remove --purge snapd && sudo systemctl daemon-reload && sudo systemctl restart snapd && snap version && sudo apt update && echo "ALOHA: installing snapd" | systemd-cat && sudo snap install snapd && echo "ALOHA: done installing snapd" | systemd-cat

This causes the following log lines to show up:

mar 13 13:02:52 ciri systemd[1]: Looking for unit files in (higher priority first):
mar 13 13:02:52 ciri systemd[1]: /etc/systemd/system.control
mar 13 13:02:52 ciri systemd[1]: /run/systemd/system.control
mar 13 13:02:52 ciri systemd[1]: /run/systemd/transient
mar 13 13:02:52 ciri systemd[1]: /run/systemd/generator.early
mar 13 13:02:52 ciri systemd[1]: /etc/systemd/system
mar 13 13:02:52 ciri systemd[1]: /etc/systemd/system.attached
mar 13 13:02:52 ciri systemd[1]: /run/systemd/system
mar 13 13:02:52 ciri systemd[1]: /run/systemd/system.attached
mar 13 13:02:52 ciri systemd[1]: /run/systemd/generator
mar 13 13:02:52 ciri systemd[1]: /usr/local/lib/systemd/system
mar 13 13:02:52 ciri systemd[1]: /usr/lib/systemd/system
mar 13 13:02:52 ciri systemd[1]: /run/systemd/generator.late
mar 13 13:02:52 ciri systemd[1]: Modification times have changed, need to update cache.

The message at the bottom of the log comes from systemd's src/basic/unit-file.c

bool lookup_paths_timestamp_hash_same(const LookupPaths *lp, uint64_t timestamp_hash, uint64_t *ret_new) {
        struct siphash state;

        siphash24_init(&state, HASH_KEY.bytes);

        STRV_FOREACH(dir, lp->search_path) {
                struct stat st;

                if (lookup_paths_mtime_exclude(lp, *dir))
                        continue;

                /* Determine the latest lookup path modification time */
                if (stat(*dir, &st) < 0) {
                        if (errno == ENOENT)
                                continue;

                        log_debug_errno(errno, "Failed to stat %s, ignoring: %m", *dir);
                        continue;
                }

                siphash24_compress_usec_t(timespec_load(&st.st_mtim), &state);
        }

        uint64_t updated = siphash24_finalize(&state);
        if (ret_new)
                *ret_new = updated;
        if (updated != timestamp_hash)
                log_debug("Modification times have changed, need to update cache.");
        return updated == timestamp_hash;
}

Modifying the mtime of any of the directories above is sufficient to make the hash differ.

I've patched systemd (with additional printfs) to tell us why it thinks it needs to be reloaded, to get an idea of which trigger is left stale.
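For illustration, that mtime-hash check can be sketched in Python. This is a simplified stand-in, not the real implementation: hashlib.blake2b replaces systemd's siphash24, and the exclusion logic (lookup_paths_mtime_exclude) is omitted.

```python
import hashlib
import os

def lookup_paths_timestamp_hash(search_paths):
    """Hash the mtimes of every existing search-path directory,
    mirroring systemd's lookup_paths_timestamp_hash_same().
    blake2b stands in for siphash24 here."""
    h = hashlib.blake2b(digest_size=8)
    for d in search_paths:
        try:
            st = os.stat(d)
        except FileNotFoundError:
            continue  # ENOENT directories are skipped, as in the C code
        h.update(st.st_mtime_ns.to_bytes(16, "little"))
    return h.hexdigest()
```

Any change to a directory's mtime (a unit file added or removed inside it) yields a different hash the next time the check runs, which is when "Modification times have changed, need to update cache." gets logged.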

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Both before and after daemon-reload the units have the same definition:

$ systemctl cat apt-news.service esm-cache.service
# /usr/lib/systemd/system/apt-news.service
# APT News is hosted at https://motd.ubuntu.com/aptnews.json and can include
# timely information related to apt updates available to your system.
# This service runs in the background during an `apt update` to download the
# latest news and set it to appear in the output of the next `apt upgrade`.
# The script won't do anything if you've run: `pro config set apt_news=false`.
# The script will limit network requests to at most once per 24 hours.
# You can also host your own aptnews.json and configure your system to use it
# with the command:
# `pro config set apt_news_url=https://yourhostname/path/to/aptnews.json`

[Unit]
Description=Update APT News

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/lib/ubuntu-advantage/apt_news.py
AppArmorProfile=ubuntu_pro_apt_news
CapabilityBoundingSet=~CAP_SYS_ADMIN
CapabilityBoundingSet=~CAP_NET_ADMIN
CapabilityBoundingSet=~CAP_NET_BIND_SERVICE
CapabilityBoundingSet=~CAP_SYS_PTRACE
CapabilityBoundingSet=~CAP_NET_RAW
PrivateTmp=true
RestrictAddressFamilies=~AF_NETLINK
RestrictAddressFamilies=~AF_PACKET
# These may break some tests, and should be enabled carefully
#NoNewPrivileges=true
#PrivateDevices=true
#ProtectControlGroups=true
# ProtectHome=true seems to reliably break the GH integration test with a lunar lxd on jammy host
#ProtectHome=true
#ProtectKernelModules=true
#ProtectKernelTunables=true
#ProtectSystem=full
#RestrictSUIDSGID=true
# Unsupported in bionic
# Suggestion from systemd.exec(5) manpage on SystemCallFilter
#SystemCallFilter=@system-service
#SystemCallFilter=~@mount
#SystemCallErrorNumber=EPERM
#ProtectClock=true
#ProtectKernelLogs=true

# /usr/lib/systemd/system/esm-cache.service
# The ESM apt cache will maintain information about what ESM updates are
# available to a system. This information will be presented to users in the apt
# output, or when running pro security-status. These caches are maintained
# entirely outside the system apt configuration to avoid interference with user
# definitions. This service updates those caches. This will only have effect
# on releases where ESM is applicable, starting from Xenial: esm-apps for
# every LTS, and esm-infra for systems in expanded support period after the LTS
# expires.

[Unit]
Description=Update the local ESM caches

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/lib/ubuntu-advantage/esm_cache.py

Revision history for this message
Grant Orndorff (orndorffgrant) wrote :

Thanks for all the investigation and discussion!

Just to close out the ubuntu-pro-client related questions:
ubuntu-pro-client does run daemon-reload in postinst,
and here is a reproducer that doesn't involve the ubuntu-pro-client services:

```
lxc launch ubuntu-daily:noble test
lxc shell test
# now in the noble container
cat > /usr/lib/systemd/system/hello.service << EOF
[Unit]
Description=Hello

[Service]
Type=oneshot
ExecStart=echo hello
EOF
systemctl start hello
systemctl status hello
snap install snapd
systemctl start hello # this will show the warning
systemctl cat hello.service # no noticeable change
```

So I'll mark this invalid for u-a-t.

This also demonstrates that a totally new systemd service is affected. Does snapd iterate over all systemd units to check something? Then maybe it is accidentally updating mtime even though it doesn't change contents?
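One quick sanity check of the mtime hypothesis (a hypothetical sketch, not snapd's actual code; the file names are illustrative): creating a brand-new file in a unit directory bumps the directory's own mtime even though no existing unit file is touched, and the directory mtimes are exactly what systemd's unit-cache check hashes.

```python
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as unitdir:  # stand-in for /etc/systemd/system
    # An existing unit file that we never modify again.
    open(os.path.join(unitdir, "apt-news.service"), "w").close()
    before = os.stat(unitdir).st_mtime_ns

    time.sleep(0.05)
    # Dropping a brand-new unit into the directory (like a snap-*.mount)...
    open(os.path.join(unitdir, "snap-example.mount"), "w").close()
    after = os.stat(unitdir).st_mtime_ns

# ...bumps the directory mtime even though apt-news.service was untouched.
print(after != before)
```

So even a snapd operation that only adds new mount units, never rewriting anyone else's files, would still invalidate the cache by this mechanism.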

Changed in ubuntu-advantage-tools (Ubuntu):
status: Incomplete → Invalid
Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Yes, I think we may be enumerating a directory / statting files. I don't believe we open anything unless we want to have a look, but I _could_ be wrong, and I'm still investigating (with interruptions to attend calls).

I don't believe it is related to ubuntu-pro-client; the only reason it is in the report is that the "apt update" hook calls into systemctl, so the warning is printed there.

Revision history for this message
Heinrich Schuchardt (xypron) wrote :

Today I saw the warning on a riscv64 Ubuntu 24.04 system booted from https://cdimage.ubuntu.com/ubuntu-server/daily-preinstalled/pending/noble-preinstalled-server-riscv64+icicle.img.xz after executing apt-get update.

Revision history for this message
Eccentric Orange (eccentricorange) wrote (last edit ):

I am seeing this message on Ubuntu 24.04 Beta x64. I did an apt update and apt upgrade yesterday without any issues, and got this message out of the blue today.

Sorry, I might not have been able to follow this entire discussion, but if you need any logs/info from me and can guide me on providing them, I'll happily oblige.

**Edit:** Reboot fixed it

Revision history for this message
Islam (islam) wrote (last edit ):

Same thing on 24.04, and rebooting doesn't fix it.

It seems those unit files belong to ubuntu-pro-client.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

TL;DR:

Like many others I've gone deeper, but neither the times in `stat` nor the
checksums of /usr/lib/systemd/system/hello.service changed.

Turns out this wasn't even about their file states.
Additionally, my understanding was wrong, and potentially yours as well.

The "is this outdated" state is tracked not only per service via `struct UnitStatusInfo`,
but also globally across all services via `struct Manager`.

Snapd's way of enabling its mount unit sets that global state, and
therefore it either needs to change how it enables units or run
a daemon-reload afterwards, just like most .deb package installs do.

Details:

First we need to be careful: there are two easily confused code paths here that
can trigger the same message:

a) on service start
  start_unit_one
    -> if (need_daemon_reload(bus, name) > 0)
      -> warn_unit_file_changed(name);
  This is a function:
    int need_daemon_reload(sd_bus *bus, const char *unit)
  It will call out via dbus asking for the attribute NeedDaemonReload

b) on service status
  print_status_info
   -> if (i->need_daemon_reload)
      -> warn_unit_file_changed(i->id);

Also for storage, there are two:

c) `struct Manager` containing `unit_file_state_outdated` which is a global
  state for all units in that manager

d) Each `struct UnitStatusInfo` has a field `need_daemon_reload` (yes just
  named like the function above) that can flag this per unit.

And (a) isn't even per service.

The value of that can be fetched per service via dbus like:
$ dbus-send --system --print-reply --dest="org.freedesktop.systemd1" "/org/freedesktop/systemd1/unit/hello_2eservice" "org.freedesktop.DBus.Properties.Get" "string:org.freedesktop.systemd1.Unit" "string:NeedDaemonReload"
method return time=1713787033.411485 sender=:1.3 -> destination=:1.19 serial=3833 reply_serial=2
   variant boolean false

And the same can be fetched via `systemctl show` as well:

root@test:~# systemctl show hello | grep '^NeedDaemonReload'
NeedDaemonReload=yes

With the above in mind we can see that installing snapd renders ALL of them
outdated. It was spotted with pro, reproduced with a simple example,
and if you check the system it is indeed all of them.

root@test:~# for u in $(systemctl list-units --output json | jq '.[].unit' | tr -d '"'); do systemctl show $u | grep '^NeedDaemonReload'; done 2>/dev/null | uniq -c
    143 NeedDaemonReload=no
root@test:~# snap install snapd
2024-04-22T12:44:50Z INFO Waiting for automatic snapd restart...
snapd 2.62 from Canonical✓ installed
root@test:~# for u in $(systemctl list-units --output json | jq '.[].unit' | tr -d '"'); do systemctl show $u | grep '^NeedDaemonReload'; done 2>/dev/null | uniq -c
    144 NeedDaemonReload=yes

Still, the question is which of the two data points it is switching.
It could be the global setting, but it could also be iterating and setting it per service.

I found that the global state could get changed in src/core/dbus-manager.c
in very similarly named methods:
- method_add_dependency_unit_files
- method_preset_all_unit_files
- method_revert_unit_files
- method_disable_unit_files_generic
- method_preset_unit_files_with_mode
- method_enable_unit_files_generic

All of them eventually do the same thing:
  m->un...
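The `systemctl show | grep` loop above can also be factored into a small parser. Only the parsing of the `Key=value` output format of `systemctl show` is sketched here, so it can be exercised without a running systemd; the function names are my own.

```python
def need_daemon_reload(show_output: str) -> bool:
    """Return the NeedDaemonReload flag parsed from the output of
    `systemctl show <unit>`, which is a series of Key=value lines."""
    for line in show_output.splitlines():
        key, sep, value = line.partition("=")
        if sep and key == "NeedDaemonReload":
            return value.strip() == "yes"
    return False

def count_stale(show_outputs) -> int:
    """Count how many units report NeedDaemonReload=yes."""
    return sum(need_daemon_reload(out) for out in show_outputs)
```

On a live system the inputs would come from running `systemctl show <unit>` once per unit, exactly as the shell loop does.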

Revision history for this message
J (picea-sitchensis) wrote :

Hello. I'm also running into this problem:

sudo apt update && sudo apt upgrade
Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hit:1 http://us.archive.ubuntu.com/ubuntu noble InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:3 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu noble-backports InRelease
Hit:5 https://linux.teamviewer.com/deb stable InRelease
Hit:6 https://esm.ubuntu.com/apps/ubuntu noble-apps-security InRelease
Hit:7 https://esm.ubuntu.com/infra/ubuntu noble-infra-security InRelease
Ign:8 https://repository.mullvad.net/deb/stable noble InRelease
Hit:9 https://ppa.launchpadcontent.net/unit193/encryption/ubuntu noble InRelease
Err:10 https://repository.mullvad.net/deb/stable noble Release
  404 Not Found [IP: 45.149.104.1 443]
Reading package lists... Done
W: https://ppa.launchpadcontent.net/unit193/encryption/ubuntu/dists/noble/InRelease: Signature by key 3BFB8E06536B8753AC58A4A303647209B58A653A uses weak algorithm (rsa1024)
E: The repository 'https://repository.mullvad.net/deb/stable noble Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

I have no idea what it even means. Any suggestions? Should I even be concerned about this?

Revision history for this message
Doug Fisherman (cylinder) wrote (last edit ):

Fixed after reboot.
Did a reboot work for any others?

Happened to me after clearing out games, then apt purge hexchat.

Machine is all enterprise gear: a Dell PowerEdge R730xd.
My CLI error is:

sudo apt update
[sudo] password for xxxxxx:
Sorry, try again.
[sudo] password for xxxxxx:
Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hit:1 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:2 http://ca.archive.ubuntu.com/ubuntu noble InRelease
Get:3 http://ca.archive.ubuntu.com/ubuntu noble-updates InRelease [89.7 kB]
Hit:4 http://ca.archive.ubuntu.com/ubuntu noble-backports InRelease
Fetched 89.7 kB in 1s (113 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.

Revision history for this message
Grant Orndorff (orndorffgrant) wrote :

It looks like the global need-reload state that Christian investigated, which is being set by a snapd operation, was added recently in systemd.

https://github.com/systemd/systemd/commit/a82b8b3dc80619c3275ad8180069289b411206d0

That is likely why we're only seeing this issue in noble.

From reading the commit message there, it sounds like the right thing to do is for snapd to issue a daemon-reload after it sets up all its units.

Revision history for this message
Paul White (paulw2u) wrote (last edit ):

I'm sorry if this comment isn't helpful, but I'm only seeing this on a new noble installation, never on another installation that was upgraded from mantic some time ago, early in the noble development period.
