snapd disk write usage very high on 19.04 with snapd 2.39.3

Bug #1839237 reported by LI HE
This bug affects 13 people
Affects: snapd (Ubuntu) — Status: Confirmed — Importance: Undecided — Assigned to: Unassigned — Milestone: none

Bug Description

release: 19.04
snapd:
  Installed: 2.39.2+19.04
  Candidate: 2.39.2+19.04
  Version table:
 *** 2.39.2+19.04 500
        500 http://mirrors.yun-idc.com/ubuntu disco-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.38+19.04 500
        500 http://mirrors.yun-idc.com/ubuntu disco/main amd64 Packages

snapd keeps writing to disk continuously; the write rate reaches 5 MB/s while reads stay at 0. The only way I can protect my hardware is to stop the snapd service after each system startup. The disk is an NVMe SSD.

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Hello

Can you please help us understand the issue? While you are observing the disk IO, what is the output of "snap changes"?

Changed in snapd (Ubuntu):
status: New → Incomplete
Revision history for this message
LI HE (lihe2096) wrote :

Thank you for responding to my question. I ran this command and got the following output:

44 Doing 3 days ago, at 09:44 CST - Auto-refresh snap "redis-desktop-manager"

Revision history for this message
LI HE (lihe2096) wrote :

I don't know whether the problem lies here, but I checked the contents of the change and then aborted it. The tasks are as follows:

lihe@system:~$ snap tasks 44
Status Spawn Ready Summary
Done 3 days ago, at 09:44 CST 3 days ago, at 09:44 CST Ensure prerequisites for "redis-desktop-manager" are available
Doing 3 days ago, at 09:44 CST - Download snap "redis-desktop-manager" (327) from channel "stable" (79.22%)
Do 3 days ago, at 09:44 CST - Fetch and check assertions for snap "redis-desktop-manager" (327)
Do 3 days ago, at 09:44 CST - Mount snap "redis-desktop-manager" (327)
Do 3 days ago, at 09:44 CST - Run pre-refresh hook of "redis-desktop-manager" snap if present
Do 3 days ago, at 09:44 CST - Stop snap "redis-desktop-manager" services
Do 3 days ago, at 09:44 CST - Remove aliases for snap "redis-desktop-manager"
Do 3 days ago, at 09:44 CST - Make current revision for snap "redis-desktop-manager" unavailable
Do 3 days ago, at 09:44 CST - Copy snap "redis-desktop-manager" data
Do 3 days ago, at 09:44 CST - Setup snap "redis-desktop-manager" (327) security profiles
Do 3 days ago, at 09:44 CST - Make snap "redis-desktop-manager" (327) available to the system
Do 3 days ago, at 09:44 CST - Automatically connect eligible plugs and slots of snap "redis-desktop-manager"
Do 3 days ago, at 09:44 CST - Set automatic aliases for snap "redis-desktop-manager"
Do 3 days ago, at 09:44 CST - Setup snap "redis-desktop-manager" aliases
Do 3 days ago, at 09:44 CST - Run post-refresh hook of "redis-desktop-manager" snap if present
Do 3 days ago, at 09:44 CST - Start snap "redis-desktop-manager" (327) services
Do 3 days ago, at 09:44 CST - Clean up "redis-desktop-manager" (327) install
Do 3 days ago, at 09:44 CST - Run configure hook of "redis-desktop-manager" snap if present
Doing 3 days ago, at 09:44 CST - Consider re-refresh of "redis-desktop-manager"

After aborting the change, the system returned to normal.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for snapd (Ubuntu) because there has been no activity for 60 days.]

Changed in snapd (Ubuntu):
status: Incomplete → Expired
Revision history for this message
sfc (sfc-0) wrote :

This still happens to me on 19.10 when snapd performs an auto-refresh.

snapd version: 2.41+19.10.1

Revision history for this message
Gao Shichao (xgdgsc) wrote :

I also see very high disk write usage (the disk LED stays on constantly) on Ubuntu 18.04. I have to disable snapd since I never use it here in China, where the snap download speed is unusable.

Changed in snapd (Ubuntu):
status: Expired → Confirmed
Revision history for this message
Kaijia Feng (fengkaijia) wrote :

Exactly the same issue, also inside mainland China. While snapd is using ~5 MB/s of writes, `snap changes` shows:

ID Status Spawn Ready Summary
347 Doing today at 16:37 HKT - Auto-refresh snaps "<other snap>", "wine-platform-runtime"

Disk write is done once the `snap changes` output turns to:

ID Status Spawn Ready Summary
347 Error today at 16:37 HKT today at 17:03 HKT Auto-refresh snaps "<other snap>", "wine-platform-runtime"

Syslog between 16:37 and 17:03:

Jan 30 16:37:02 HOSTNAME snapd[1549]: storehelpers.go:436: cannot refresh:
Jan 30 16:37:02 HOSTNAME snapd[1549]: snap has no updates available: "<other snap 1>", "<other snap 2>", "<other snaps....>"
Jan 30 16:37:02 HOSTNAME snapd[1549]: snap not found: "wine-platform"
Jan 30 17:03:00 HOSTNAME snapd[1549]: storehelpers.go:436: cannot refresh snap "wine-platform-runtime": snap has no updates available

Revision history for this message
Kaijia Feng (fengkaijia) wrote :

Typo in #7: disk write is *gone*, i.e. 0 MB/s of IO.

Revision history for this message
Kaijia Feng (fengkaijia) wrote :

I can confirm this issue happens every time snapd auto-refreshes a snap in the background (it may be necessary to refresh multiple snaps at the same time to trigger it). If you show the tasks of the current "Doing" change, you will see it is downloading something, e.g. `snap tasks 348`:

.....
Done today at 20:50 HKT today at 20:50 HKT Ensure prerequisites for "slack" are available
Doing today at 20:50 HKT - Download snap "slack" (21) from channel "stable" (99.85%)
Do today at 20:50 HKT - Fetch and check assertions for snap "slack" (21)
.....
Done today at 20:50 HKT today at 20:53 HKT Ensure prerequisites for "<other snap>" are available
Doing today at 20:50 HKT - Download snap "<other snap>" (1038) from channel "stable" (12.17%)
.....

There is around 5 MB/s of IO whenever snapd is downloading something, whatever the download speed is. As the connection to Canonical's servers from China is poor, people can feel this IO lasting a long time.

I guess the question now is: why is there this 5 MB/s of IO while snapd is refreshing a snap?

Revision history for this message
Kaijia Feng (fengkaijia) wrote :

Hi Zygmunt, could you have a look at the added info? @zyga

Revision history for this message
John Lenton (chipaca) wrote :

What might be happening (we would need DEBUG logs to know for sure) is that the download gets interrupted, so snapd rehashes (i.e. reads) what has been downloaded so far before continuing.
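That hypothesis can be illustrated with a small sketch. This is not snapd's actual code: `fetch_remaining` is a hypothetical callback standing in for the HTTP range request, and plain SHA-384 is used for simplicity (snap downloads are actually verified against SHA3-384 digests). The point is that every resume re-reads the entire partial file so the final digest can still be verified:

```python
import hashlib
import os

CHUNK = 1 << 16  # read/hash in 64 KiB chunks

def resume_download(path, fetch_remaining):
    """Resume an interrupted download while keeping a running digest.

    Rehashes whatever was already downloaded (a full re-read of the
    partial file), then appends the remaining bytes.
    """
    h = hashlib.sha384()
    offset = 0
    if os.path.exists(path):
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                h.update(chunk)        # re-reads the entire partial file
                offset += len(chunk)
    with open(path, "ab") as f:
        for chunk in fetch_remaining(offset):  # caller yields the missing bytes
            h.update(chunk)
            f.write(chunk)
    return h.hexdigest()
```

Under this model, a download interrupted N times re-reads the partial file N times, which would explain reads proportional to what has been downloaded so far.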

Revision history for this message
sfc (sfc-0) wrote :

This still happens on 20.04. Could anyone tell me how to get some debug logs?

Revision history for this message
Gao Shichao (xgdgsc) wrote :

https://news.ycombinator.com/item?id=22972661 Doesn't matter anymore as people are removing this shit.

Revision history for this message
John (john-mg-allen) wrote :

This issue has occurred on my clean 20.04 install, completely thrashing my SSD for about 40 minutes; stopping the service would just have it restart a few seconds later. "journalctl --system -f -u snapd" didn't reveal anything of interest, and restarting the OS didn't solve the issue. I've had to disable snapd just to stop my SSD from dying.

Revision history for this message
Maciej Borzecki (maciek-borzecki) wrote :

Unfortunately I cannot reproduce this behavior here.

There is obviously measurable IO when a snap is refreshed, but that is expected: a new revision needs to be downloaded (writes), downloads may be retried (reads to hash the partial files, then more writes), followed by the bookkeeping tasks (reads and writes). More snaps will naturally lead to higher IO over a longer time. During the downloads one can expect the IO write rate to be roughly the download speed, since the data has to end up on disk at some point.

Would you mind helping us debug the problem? I think it would be useful to collect the output of `sudo pidstat -d -T ALL -p `pidof snapd` 5` and `snap changes`.

In my system I get the following stats during idle:

11:12:04 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
11:12:05 0 4032 0.00 0.00 0.00 0 snapd

Then when for instance installing lxd:

>>>>> download still in progress
11:13:48 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
11:13:53 0 4032 0.00 1043.20 0.00 0 snapd

11:13:53 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
11:13:58 0 4032 5.60 925.60 0.00 0 snapd

>>>>> download done, started bookkeeping tasks
11:13:58 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
11:14:03 0 4032 1048.80 1677.60 138.40 0 snapd

>>>>> all done, back to idle
11:14:03 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
11:14:08 0 4032 0.00 0.00 0.00 0 snapd

Changed in snapd (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
John (john-mg-allen) wrote :

Maciej, I've re-enabled the snapd service and will report back with the results of those debug commands you've suggested when the heavy disk usage starts again.

I'll add that I've never seen such long, sustained and intense disk thrashing apart from large drive-to-drive backups. I only have 3 or 4 extra snaps installed beyond what comes with Ubuntu 20.04 by default, and my slow internet connection can't download data fast enough to drive my SSD into overdrive.

Revision history for this message
Colin Ian King (colin-king) wrote :

One can see the file activity in more detail using:

sudo apt install fnotifystat
sudo fnotifystat

That will show exactly what is going on.

Revision history for this message
John (john-mg-allen) wrote :

Since disabling snapd service and re-enabling it a day later I've been unable to reproduce the issue.

Revision history for this message
sfc (sfc-0) wrote :

While snapd is refreshing chromium, fnotifystat shows:

Total Open Close Read Write PID Process Pathname
 53.2 15.7 15.7 21.9 0.0 1423 snapd /var/lib/snapd/assertions/asserts-v0/serial/generic/generic-classic/3e9f8cfc-6448-4ee5-a063-e6dca36cbd7b
 48.5 15.7 15.7 17.2 0.0 1423 snapd /var/lib/snapd/assertions/asserts-v0/model/16/generic/generic-classic
 47.0 15.7 15.7 15.7 0.0 1423 snapd /var/lib/snapd/assertions/asserts-v0/model/16/generic/generic-classic/active
 47.0 15.7 15.7 15.7 0.0 1423 snapd /var/lib/snapd/assertions/asserts-v0/serial/generic/generic-classic/3e9f8cfc-6448-4ee5-a063-e6dca36cbd7b/active
 37.6 18.8 18.8 0.0 0.0 1423 snapd /var/lib/snapd
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.3JyPRpSX9Wny~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.6nB3FqVxvjFb~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.D1LhKm5mlyGy~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.Dq4TVRJVvRG1~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.FMNtFS8YYlng~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.hGy5jX2syq49~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.j8Xyb0TqmhmH~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.qVC5sVR5pfg3~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.r4v1vtQDQxX6~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.rKJwFbvPDkW4~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.xS543gv26pD2~
  6.3 1.6 1.6 0.0 3.1 1423 snapd /var/lib/snapd/state.json.y8GgXRcY3xmk~
  1.6 0.0 0.0 0.0 1.6 1423 snapd /var/lib/snapd/snaps/chromium_1165.snap.xdelta3-1143-to-1165.partial
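The many `state.json.XXXXXXXX~` entries in the output above suggest snapd persists its state with a write-temp-then-rename pattern, rewriting the whole state file on every update. A minimal sketch of that pattern (an illustration of what the temp-file names suggest, not snapd's actual implementation):

```python
import json
import os
import tempfile

def atomic_save(path, state):
    """Persist `state` as JSON using write-to-temp-then-rename.

    The temp file gets a random suffix in the same directory (compare
    the state.json.XXXXXXXX~ names in the fnotifystat output), so the
    rename is atomic and readers never see a half-written file.
    """
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory,
                               prefix=os.path.basename(path) + ".")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # force data to disk before the rename
        os.rename(tmp, path)      # atomic replace on POSIX
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

Each call rewrites the entire file, so if progress is persisted frequently during a refresh, the resulting write volume can far exceed the actual download size.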

Revision history for this message
Travis Downs (travis-downs) wrote :

I am suffering from the same issue: snapd has been writing at 2 MB/s for almost 5 straight days. I have poor internet connectivity, and most of the time the network monitor shows zero network activity, so snapd is not actually downloading anything.

$ snap changes
ID Status Spawn Ready Summary
54 Doing today at 10:35 -04 - Auto-refresh snap "chromium"

$ sudo fnotifystat
Total Open Close Read Write PID Process Pathname
 55.0 18.0 18.0 19.0 0.0 13786 snapd /var/lib/snapd/assertions/asserts-v0/model/16/generic/generic-classic
 54.0 18.0 18.0 18.0 0.0 13786 snapd /var/lib/snapd/assertions/asserts-v0/model/16/generic/generic-classic/active
 54.0 18.0 18.0 18.0 0.0 13786 snapd /var/lib/snapd/assertions/asserts-v0/serial/generic/generic-classic/d4eca5b4-8c2b-4ec1-a6b2-4ed7a8e76fbe
 54.0 18.0 18.0 18.0 0.0 13786 snapd /var/lib/snapd/assertions/asserts-v0/serial/generic/generic-classic/d4eca5b4-8c2b-4ec1-a6b2-4ed7a8e76fbe/active
 36.0 18.0 18.0 0.0 0.0 13786 snapd /var/lib/snapd
 26.0 0.0 13.0 0.0 13.0 13786 snapd /var/lib/snapd/state.json
  4.0 1.0 1.0 0.0 2.0 13786 snapd /var/lib/snapd/state.json.8s9NlHqn9hgk~
  4.0 1.0 1.0 0.0 2.0 13786 snapd /var/lib/snapd/state.json.9K7JVycntjF2~
  4.0 1.0 1.0 0.0 2.0 13786 snapd /var/lib/snapd/state.json.DQF0MS3WKj05~
  4.0 1.0 1.0 0.0 2.0 13786 snapd /var/lib/snapd/state.json.Llsbh5NnGHgK~
  4.0 1.0 1.0 0.0 2.0 13786 snapd /var/lib/snapd/state.json.hkfJT9N0htBG~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.4XgMb38QNtrW~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.6vHSRf3xbRS5~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.FwlhlyDlFXLp~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.G64fgNkBmr2x~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.Kp061JNgygDv~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.PQ5byShcJ5Ym~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.Q6PC97T8x9rm~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.RWkVcKbtsGw4~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.Wr5BSmHwg0k1~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.brJjjjMnyrmB~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.cbv2dSN9YH8r~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.pPjJXh9CQRG6~
  2.0 1.0 0.0 0.0 1.0 13786 snapd /var/lib/snapd/state.json.rwSwHlQ95nv7~

This is chewing up my SSD, and my last SSD failed at only 10% of its write life used, so this is a real concern for me.

Revision history for this message
sfc (sfc-0) wrote :

sudo pidstat -d -T ALL -p `pidof snapd` 5

18:51:41 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
18:51:46 0 4326 0.00 7019.20 0.00 0 snapd

18:51:46 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
18:51:51 0 4326 0.00 7254.40 0.00 0 snapd

18:51:51 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
18:51:56 0 4326 0.00 7230.40 0.00 0 snapd

18:51:56 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
18:52:01 0 4326 0.00 7120.00 0.00 0 snapd

18:52:01 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
18:52:06 0 4326 0.00 7016.80 0.00 0 snapd
^C

Average: UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
Average: 0 4326 0.00 7128.16 0.00 0 snapd

snap changes

ID Status Spawn Ready Summary
91 Done today at 17:14 CST today at 17:14 CST Connect chromium:password-manager-service to core:password-manager-service
92 Error today at 17:19 CST today at 17:56 CST Refresh snaps "gtk-common-themes", "core", "core18", "code", "intellij-idea-community", "gnome-3-26-1604", "gnome-3-28-1804", "chromium"
93 Error today at 18:00 CST today at 18:06 CST Refresh snaps "gtk-common-themes", "core", "code", "intellij-idea-community", "gnome-3-26-1604", "gnome-3-28-1804", "chromium"
94 Error today at 18:06 CST today at 18:08 CST Refresh snaps "core", "code", "intellij-idea-community", "gnome-3-26-1604", "gnome-3-28-1804", "chromium", "gtk-common-themes"
95 Doing today at 18:09 CST - Refresh snaps "core", "code", "intellij-idea-community", "gnome-3-26-1604", "gnome-3-28-1804", "chromium", "gtk-common-themes"

snapd shows ~7 MB/s of disk writes while the download is very slow (only ~50 KB/s according to nethogs).

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for snapd (Ubuntu) because there has been no activity for 60 days.]

Changed in snapd (Ubuntu):
status: Incomplete → Expired
Revision history for this message
bl (blagopoluchie) wrote :

The problem is still ongoing on 20.04. I have to open a console daily and run the following commands, because snapd starts using my SSD non-stop after every restart. It masquerades as "updating Chromium".

sudo kill -9 $(pgrep -fn snapd)
sudo systemctl stop snapd
sudo systemctl stop snapd.socket
sudo systemctl stop snapd.service

snap changes

ID Status Spawn Ready Summary
76 Doing 2 days ago, at 14:56 MSK - Auto-refresh snap "chromium"

sudo pidstat -d -T ALL -p `pidof snapd` 5

18:58:15 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
18:58:20 0 1108 0,00 4024,00 0,00 0 snapd

sudo fnotifystat

Total Open Close Read Write PID Process Pathname
 57.0 18.0 18.0 21.0 0.0 1108 snapd /var/lib/snapd/assertions/asserts-v0/model/16/generic/generic-classic
 55.0 18.0 18.0 19.0 0.0 1108 snapd /var/lib/snapd/assertions/asserts-v0/serial/generic/generic-classic/bd2c60ac-7492-417f-ad47-f1fbcf7e8656
 54.0 18.0 18.0 18.0 0.0 1108 snapd /var/lib/snapd/assertions/asserts-v0/model/16/generic/generic-classic/active
 54.0 18.0 18.0 18.0 0.0 1108 snapd /var/lib/snapd/assertions/asserts-v0/serial/generic/generic-classic/bd2c60ac-7492-417f-ad47-f1fbcf7e8656/active
 36.0 18.0 18.0 0.0 0.0 1108 snapd /var/lib/snapd
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.4l5jJQdW0sP3~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.4nyNYbyTTJ21~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.54s0KBgLsdSg~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.5ydLLQG7tC30~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.8k0RWXmCjWC8~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.BHfwQMjR6PSV~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.CVcTY7Bmfmdm~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.DpPCvll2h5tG~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.GKBdLMJLJt5t~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.R6jlBXQmTWG7~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.chY4phGDgT2v~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.djJ2sckH3xK0~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.gsxWS3XBnxS8~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.hqsxj4M83rLD~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.mlfyStjYcShW~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.mpfTr1J1pjjG~
  4.0 1.0 1.0 0.0 2.0 1108 snapd /var/lib/snapd/state.json.rCTJHYgY9gYh~
  4.0 1.0 1.0 0.0 2.0 1108 snapd ...


Revision history for this message
Maciej Borzecki (maciek-borzecki) wrote :

The message states that snapd is updating chromium.

Can you attach the output of `snap change 76`?

Revision history for this message
bl (blagopoluchie) wrote :

Yes, it says "updating chromium" with 4 MB/s of writes non-stop and no network traffic...

Yesterday I stopped it with `sudo systemctl stop snapd` and `sudo kill -9 $(pgrep -fn snapd)`. I don't know whether it started again later, since I didn't directly disable snapd.socket.
It seems the Chromium version has been updated from 85 to 86.

snap change 76

Status Spawn Ready Summary
...
Done 3 days ago, at 14:56 MSK today at 00:12 MSK Run health check of "chromium" snap
Done 3 days ago, at 14:56 MSK today at 00:12 MSK Consider re-refresh of "chromium"
Done today at 00:12 MSK today at 00:12 MSK Connect chromium:etc-chromium-browser-policies to snapd:system-files
Done today at 00:12 MSK today at 00:12 MSK Connect chromium:system-packages-doc to snapd:system-packages-doc

How can I see that the download is actually making progress, for example via the growing size of the downloaded files or a completion percentage counter?
According to the commands and files above, there is absolutely no visible download progress or traffic.
Could you add such a percent-complete counter?

Revision history for this message
Maciej Borzecki (maciek-borzecki) wrote :

It is visible; try running, say:

  snap install --no-wait spotify

The commands outputs a change number, then run, this periodically:

  snap change <id>

On my system it shows:

$ snap change 406
Status Spawn Ready Summary
Done today at 13:13 CEST today at 13:13 CEST Ensure prerequisites for "spotify" are available
Doing today at 13:13 CEST - Download snap "spotify" (42) from channel "stable" (0.00%)
Do today at 13:13 CEST - Fetch and check assertions for snap "spotify" (42)
...

$ snap change 406
Status Spawn Ready Summary
Done today at 13:13 CEST today at 13:13 CEST Ensure prerequisites for "spotify" are available
Doing today at 13:13 CEST - Download snap "spotify" (42) from channel "stable" (2.22%)
Do today at 13:13 CEST - Fetch and check assertions for snap "spotify" (42)
...

Revision history for this message
bl (blagopoluchie) wrote :

Now I see. Thank you.

Revision history for this message
bl (blagopoluchie) wrote :

date
Fri 23 Oct 2020 08:11:34 MSK

snap changes
ID Status Spawn Ready Summary
79 Doing today at 06:19 MSK - Auto-refresh snaps "core", "chromium"

snap change 79
Status Spawn Ready Summary
Done today at 06:19 MSK today at 06:19 MSK Ensure prerequisites for "core" are available
Doing today at 06:19 MSK - Download snap "core" (10185) from channel "latest/stable" (0.00%)
Do today at 06:19 MSK - Fetch and check assertions for snap "core" (10185)
...

skipping the minor "Do" tasks
...
Do today at 06:19 MSK - Start snap "chromium" (1362) services
Do today at 06:19 MSK - Remove data for snap "chromium" (1328)
Do today at 06:19 MSK - Remove snap "chromium" (1328) from the system
Do today at 06:19 MSK - Clean up "chromium" (1362) install
Do today at 06:19 MSK - Run configure hook of "chromium" snap if present
Do today at 06:19 MSK - Run health check of "chromium" snap
Doing today at 06:19 MSK - Consider re-refresh of "core", "chromium"

snap list
chromium 86.0.4240.75 1350 latest/stable canonical✓ -
core 16-2.47 10126 latest/stable canonical✓ core

snap refresh --list
chromium 86.0.4240.111 1362 canonical✓ -
core 16-2.47.1 10185 canonical✓ core

snapd is using 23% of CPU:
ps aux | grep snap
root 1124 23.0 0.2 2925732 35356 ? Ssl 06:19 20:18 /usr/lib/snapd/snapd

snapd is using 99% of the SSD's disk bandwidth (65% itself plus 32% from the btrfs transaction thread):
sudo atop -D
    PID SYSCPU USRCPU VGROW RGROW RDDSK WRDSK S CPUNR DSK CMD
   1124 0.33s 2.00s 0K 0K 0K 41888K S 4 65% snapd
    363 0.08s 0.00s 0K 0K 0K 20288K S 6 32% btrfs-transact

sudo pidstat -d -T ALL -p `pidof snapd` 5
Linux 5.4.0-52-generic (desktop) 23.10.2020 _x86_64_ (12 CPU)
08:15:18 UID PID kB_rd/s kB_wr/s kB_ccwr/s iodelay Command
08:15:23 0 1124 0,00 4314,57 0,00 0 snapd

As you can see, snapd is using 23% of CPU and ~99% of disk bandwidth, writing 4.3 MB/s (per `atop`, 43-50 MB every 10 seconds), and I don't see any network traffic. The download status has been "(0.00%)" for the 2 hours since I turned on my computer.
That is, snapd has written to the SSD: 2 hours * 60 min * 60 s * 4.3 MB/s ≈ 31 GB in 2 hours!!! And it's like this almost every day, non-stop... WTF???
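The write-volume arithmetic above checks out (a quick sanity check, using decimal gigabytes and the rate reported by pidstat):

```python
# Numbers taken from the pidstat/atop output above.
rate_mb_s = 4.3            # sustained write rate reported by pidstat
seconds = 2 * 60 * 60      # two hours of uptime
total_gb = rate_mb_s * seconds / 1000
print(f"{total_gb:.1f} GB written")  # prints "31.0 GB written"
```

At that rate the daily write volume alone would be a meaningful fraction of a consumer SSD's endurance budget, which is why commenters are worried.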

Revision history for this message
bl (blagopoluchie) wrote :

To stop this SSD killing I ran `sudo systemctl stop snapd`. The daemon didn't stop (probably because it depends on `snapd.socket`).
But when I then ran `snap change 79` again, I saw that `core` was installed and the download percentage of `chromium` started to update. The packages were installed within a minute.

snap change 79
Status Spawn Ready Summary
Done today at 06:19 MSK today at 06:19 MSK Ensure prerequisites for "core" are available
Done today at 06:19 MSK today at 08:34 MSK Download snap "core" (10185) from channel "latest/stable"
Done today at 06:19 MSK today at 08:34 MSK Fetch and check assertions for snap "core" (10185)
Done today at 06:19 MSK today at 08:34 MSK Mount snap "core" (10185)
...
Done today at 06:19 MSK today at 08:34 MSK Ensure prerequisites for "chromium" are available
Done today at 06:19 MSK today at 08:36 MSK Download snap "chromium" (1362) from channel "latest/stable"
Done today at 06:19 MSK today at 08:36 MSK Fetch and check assertions for snap "chromium" (1362)
Done today at 06:19 MSK today at 08:36 MSK Mount snap "chromium" (1362)
Done today at 06:19 MSK today at 08:36 MSK Run pre-refresh hook of "chromium" snap if present
Done today at 06:19 MSK today at 08:36 MSK Stop snap "chromium" services
...
Done today at 06:19 MSK today at 08:36 MSK Consider re-refresh of "core", "chromium"

......................................................................
Consider re-refresh of "core", "chromium"

2020-10-23T08:36:32+03:00 INFO No re-refreshes found.

Do I understand correctly that every time the computer starts, I need to manually stop or restart snapd to avoid killing the SSD and wasting resources?

Revision history for this message
Maciej Borzecki (maciek-borzecki) wrote :

If it is stuck at 0% but still writing to the disk, then it is either getting errors from the network when starting the download, or downloading the first chunks fails for some reason. In any case, it would be useful to find out why. Notably, as you observed, restarting snapd helps, which may suggest something network-state related.

If you can reproduce it, would you be willing to add the following to /etc/environment and restart snapd?

SNAPD_DEBUG=1
SNAPD_DEBUG_HTTP=7

Then, once the problem occurs, please collect the output of `journalctl -u snapd`.

Perhaps rebooting the system and then trying `snap install <some-large-snap, e.g. xonotic>` would reproduce it.

Revision history for this message
sfc (sfc-0) wrote :

This still happens on 20.10.

Now I'm using a proxy for faster downloads, but snapd is still writing too much to disk (~9MB/s disk writes while ~1.5MB/s downloads).

The snapd output from `journalctl -u snapd` with SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=7 is attached.

Revision history for this message
sfc (sfc-0) wrote :

By the way, this only happens when refreshing snaps, not when installing a new snap.

Revision history for this message
bl (blagopoluchie) wrote :

It seems so, yes; it downloads fine after I restart snapd.

After today's restart, it hangs again while downloading an update of "telegram-desktop".
Log of `journalctl -u snapd` with SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=7 is attached.

Revision history for this message
bl (blagopoluchie) wrote :

After the following command it got updated. The log is attached.

sudo systemctl stop snapd
Warning: Stopping snapd.service, but it can still be activated by: snapd.socket

Revision history for this message
wolnosc (wolnosc92) wrote :

This still happens on 20.10 in 2021. snapd is reducing the lifespan of my SSD :(

Revision history for this message
Kaijia Feng (fengkaijia) wrote :

I believe #1831629 is also describing this issue.

Revision history for this message
Ian Johnson (anonymouse67) wrote :

Hi folks, when you provide logs here with SNAPD_DEBUG_HTTP=7 enabled, please take care to remove sensitive information such as your macaroon authorization, which could enable someone to log in to your Ubuntu One account. For example, any line in the log such as:

X-Device-Authorization: Macaroon root=\"XYZ-ABC-123...\"\r\n

should be scrubbed before uploading. You can do so with a sed command like this:

journalctl --no-pager -u snapd | sed -e 's@Macaroon root=\\\".*\\\"@MACAROON-SCRUBBED@'
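If sed escaping is awkward, the same scrubbing can be done with a short Python filter. This is a hypothetical helper, not part of snapd; the regex is an assumption based on the header format shown above:

```python
import re
import sys

# Matches the macaroon credential in journal lines such as:
#   X-Device-Authorization: Macaroon root=\"XYZ-ABC-123...\"\r\n
# (the quotes appear backslash-escaped in the journal output).
MACAROON_RE = re.compile(r'Macaroon root=\\?".*?\\?"')

def scrub(line):
    """Replace any macaroon credential in a log line with a placeholder."""
    return MACAROON_RE.sub("MACAROON-SCRUBBED", line)

if __name__ == "__main__":
    # Usage (assumed filename): journalctl --no-pager -u snapd | python3 scrub.py
    for line in sys.stdin:
        sys.stdout.write(scrub(line))
```

Lines without a macaroon pass through unchanged, so the filter is safe to apply to the whole journal before attaching it.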

I have asked the LP administrators to delete any such logs/comments already present in this bug report to protect the affected accounts, but @blagopoluchie and @sfc-0 should both change their passwords.

I will work on making snapd automatically strip this information from the logs it produces, so that verbose logging, which is indeed very useful to developers, is safer for users to enable.

thanks

Changed in snapd (Ubuntu):
status: Expired → Confirmed
Revision history for this message
takeseem (takeseem) wrote :

This problem occurred on our production AWS EC2 instance; two days later, the machine suddenly rebooted.

Revision history for this message
takeseem (takeseem) wrote (last edit ):

Maybe the bug is triggered after snapd itself is updated.

bash # snap changes
ID Status Spawn Ready Summary
48 Error 2 days ago, at 13:34 UTC today at 03:28 UTC Auto-refresh snaps "amazon-ssm-agent", "snapd", "core18"
49 Done today at 03:33 UTC today at 03:33 UTC Auto-refresh snaps "amazon-ssm-agent", "core18"

Revision history for this message
Maciej Borzecki (maciek-borzecki) wrote :

@takeseem, unfortunately, due to a bug in LP I cannot access the image. Can you rename the image to use Latin characters only and upload it again?
