gvfs-gdu-volume-monitor high cpu usage during raid-resync

Bug #890337 reported by Tuomas Heino
This bug affects 2 people
Affects: gnome-disk-utility (Ubuntu)
Status: Confirmed
Importance: Medium
Assigned to: Unassigned

Bug Description

During a RAID resync, CPU usage is high; in fact gvfs-gdu-volume-monitor was using 100% of one core for the duration of the resync. Killed it with -SEGV to get something that can be retraced.

ProblemType: Crash
DistroRelease: Ubuntu 11.04
Package: gvfs 1.8.0-0ubuntu3
ProcVersionSignature: Ubuntu 2.6.38-12.51-generic 2.6.38.8
Uname: Linux 2.6.38-12-generic x86_64
NonfreeKernelModules: nvidia
Architecture: amd64
Date: Sun Nov 13 23:36:38 2011
EcryptfsInUse: Yes
ExecutablePath: /usr/lib/gvfs/gvfs-gdu-volume-monitor
InstallationMedia: Ubuntu 11.04 "Natty Narwhal" - Release amd64 (20110427)
ProcCmdline: /usr/lib/gvfs/gvfs-gdu-volume-monitor
ProcEnviron:
 SHELL=/bin/bash
 LANGUAGE=en_IE:en
 LANG=en_IE.UTF-8
SegvAnalysis:
 Segfault happened at: 0x7fca9329a3c3: jg 0x7fca9329a32e
 PC (0x7fca9329a3c3) ok
 source "0x7fca9329a32e" (0x7fca9329a32e) ok
 SP (0x7fff9b1adac0) ok
 Reason could not be automatically determined.
Signal: 11
SourcePackage: gvfs
StacktraceTop:
 ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
 g_source_attach () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
 g_idle_add_full () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
 ?? () from /usr/lib/libgdu.so.0
 ?? () from /usr/lib/libgdu.so.0
Title: gvfs-gdu-volume-monitor crashed with SIGSEGV in g_source_attach()
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups: adm admin cdrom dialout libvirtd lpadmin plugdev sambashare

Revision history for this message
Apport retracing service (apport) wrote :

StacktraceTop:
 g_source_list_add (source=0x2f85020, context=<optimized out>) at /build/buildd/glib2.0-2.28.6/./glib/gmain.c:870
 g_source_attach_unlocked (source=0x2f85020, context=0x16ff7d0) at /build/buildd/glib2.0-2.28.6/./glib/gmain.c:917
 g_source_attach (source=0x2f85020, context=0x16ff7d0) at /build/buildd/glib2.0-2.28.6/./glib/gmain.c:964
 g_idle_add_full (priority=200, function=0x7fca94365c10 <emit_changed_idle_cb>, data=0x1d37010, notify=0) at /build/buildd/glib2.0-2.28.6/./glib/gmain.c:4607
 find_pvs (vg=0x1d37010) at gdu-linux-lvm2-volume-group.c:256
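
For context, every call to g_idle_add_full() creates a new idle GSource and attaches it to the default main context. A minimal sketch of the call behind the find_pvs() frame above, with the arguments taken from the trace (the surrounding function body is assumed, not reproduced from gdu):

 /* priority=200 is G_PRIORITY_DEFAULT_IDLE; notify=0 means no
  * destroy callback is registered for the data pointer */
 g_idle_add_full (G_PRIORITY_DEFAULT_IDLE,
                  emit_changed_idle_cb,   /* function from the trace */
                  vg,                     /* data, the volume group */
                  NULL);                  /* notify */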

Revision history for this message
Apport retracing service (apport) wrote : Stacktrace.txt
Revision history for this message
Apport retracing service (apport) wrote : ThreadStacktrace.txt
Changed in gvfs (Ubuntu):
importance: Undecided → Medium
tags: removed: need-amd64-retrace
Tuomas Heino (iheino+ub)
visibility: private → public
Revision history for this message
Tuomas Heino (iheino+ub) wrote :

http://bazaar.launchpad.net/~vcs-imports/gnome-disk-utility/master/view/head:/src/gdu/gdu-linux-lvm2-volume-group.c?start_revid=930 has the following comment in find_pvs():

187 /* TODO: do incremental list management instead of recomputing on
188 * each add/remove/change event
189 */

So it seems it is recomputing all LVM PVs on every single change event. A RAID resync likely generates a ton of those (resync status percentage updates). Do all of those change events get queued and then processed slowly, with events being consumed at a far lower rate than they are generated?
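
One common GLib pattern for avoiding that pile-up is to coalesce: keep at most one idle source pending and absorb further change events until it has fired. A minimal sketch of that idea, assuming events arrive on the main thread; recompute_pvs() and on_device_changed() are illustrative names, not the actual gdu code:

 #include <glib.h>

 static guint changed_idle_id = 0;

 static void
 recompute_pvs (gpointer vg)
 {
   /* stand-in for the expensive find_pvs()-style recomputation */
 }

 static gboolean
 emit_changed_idle_cb (gpointer vg)
 {
   recompute_pvs (vg);
   changed_idle_id = 0;
   return FALSE;               /* one-shot: remove the idle source */
 }

 static void
 on_device_changed (gpointer vg)
 {
   if (changed_idle_id == 0)
     changed_idle_id = g_idle_add (emit_changed_idle_cb, vg);
   /* otherwise an idle is already pending and this event is absorbed */
 }

With a guard like this, a burst of resync status events collapses into one recompute per main-loop dispatch instead of one queued idle per event.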

affects: gvfs (Ubuntu) → gnome-disk-utility (Ubuntu)
Tuomas Heino (iheino+ub)
tags: added: precise
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in gnome-disk-utility (Ubuntu):
status: New → Confirmed