umountfs doesn't cleanly unmount / on reboot

Bug #616287 reported by Björn Schließmann
This bug affects 41 people
Affects              Status        Importance  Assigned to                 Milestone
mountall (Ubuntu)    Invalid       High        Unassigned
  Lucid              Invalid       Undecided   Unassigned
  Maverick           Invalid       Undecided   Unassigned
  Natty              Invalid       Undecided   Unassigned
  Oneiric            Invalid       High        Unassigned
sysvinit (Ubuntu)    Fix Released  High        Clint Byrum
  Lucid              Won't Fix     High        Unassigned
  Maverick           Invalid       High        Unassigned
  Natty              Fix Released  High        Canonical Foundations Team
  Oneiric            Fix Released  High        Clint Byrum

Bug Description

Binary package hint: initscripts

Sometimes (in ca. 50% of cases) when rebooting or turning off the computer, the console shows something like the following shortly before shutting down (paraphrased; plymouth disabled):

[...]
unmounting weak filesystems ...
mount: / is busy
[...]

At next reboot, my root filesystem is recognised as needing recovery, e.g.:
[ 2.720951] EXT4-fs (sda5): INFO: recovery required on readonly filesystem
[ 2.721033] EXT4-fs (sda5): write access will be enabled during recovery
[ 4.925307] EXT4-fs (sda5): orphan cleanup on readonly fs
[ 4.939550] EXT4-fs (sda5): ext4_orphan_cleanup: deleting unreferenced inode 16472
[ 4.952641] EXT4-fs (sda5): 1 orphan inode deleted
[ 4.952696] EXT4-fs (sda5): recovery complete
[ 5.717403] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)

ProblemType: Bug
DistroRelease: Ubuntu 10.10
Package: initscripts 2.87dsf-4ubuntu17
ProcVersionSignature: Ubuntu 2.6.35-14.20-generic 2.6.35
Uname: Linux 2.6.35-14-generic i686
Architecture: i386
Date: Wed Aug 11 12:17:27 2010
ProcEnviron:
 LANGUAGE=en_GB:en_US:en
 PATH=(custom, no user)
 LANG=de_DE.utf8
 SHELL=/bin/bash
SourcePackage: sysvinit

Revision history for this message
tekkenlord (linuxfever) wrote :

This happens almost constantly to me, using Kubuntu 10.04. It occurs on both my laptop and desktop and I am afraid of data loss or corruption.

Is there any workaround?

Revision history for this message
Claudio Vincenzo (hyperlinx) wrote :

Got this problem too! And furthermore it never shuts down, it just stays there until I press the power button manually.

Revision history for this message
tekkenlord (linuxfever) wrote :

Claudio, can you please post the details of your system? Are you using Ubuntu 10.04 or 10.10? Also, which desktop environment do you use? GNOME or KDE? (I have reasons to believe that the problem may be related to KDE 4.5.0)

Revision history for this message
Björn Schließmann (b-schliessmann) wrote :

If there is a KDE related problem or a problem with turning off the computer, it would make sense to open a new bug with all details. I'm not using KDE, BTW.

Revision history for this message
wizard10000 (wizard10000) wrote :

Same problem here with Kubuntu 10.04 and KDE 4.5 on both 32- and 64-bit platforms. My machine shuts down every time but FS doesn't unmount cleanly. I added lsof to /etc/init.d/umountfs like this -

... # Unmount local filesystems
lsof > /home/wizard/Documents/lsof.txt
#
if [ "$WEAK_MTPTS" ]; then ...

and have attached the output.

Revision history for this message
Björn Schließmann (b-schliessmann) wrote :

For me (no KDE relation), it's probably a problem in /etc/init.d/sendsigs. At system shutdowns that end with "mount: / is busy" and an unclean filesystem, I can always spot this message a few lines above:

killall5: omit pid buffer size 16 exceeded!

* I did some tracing; the issue seems to be the last killall5 call, which fails - and the failure is NOT checked.

* It so happens that killall5 accepts PIDs of processes it shouldn't kill, but only up to 16 (the limit seems to be set at compile time). In sendsigs, the list of PIDs to omit is built as OMITPIDS; on my system it typically contains more than 16 PIDs.
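To make the mechanism concrete, here is a hedged sketch of how a sendsigs-style script assembles its omit list before calling killall5. The directory and job names are fabricated for illustration, not the exact Ubuntu paths:

```shell
# Illustrative sketch only: the real sendsigs reads sendsigs.omit.d
# directories; here we fabricate a scratch directory with one-PID-per-file
# entries, the way daemons and Upstart jobs register themselves.
OMITDIR=$(mktemp -d)
for pid in 101 102 103; do
    echo "$pid" > "$OMITDIR/job$pid"
done

# Build one "-o PID" argument per registered process - exactly the list
# that overflowed killall5's fixed 16-entry omit buffer described above.
OMITPIDS=
for f in "$OMITDIR"/*; do
    OMITPIDS="$OMITPIDS -o $(cat "$f")"
done

CMD="killall5 -15$OMITPIDS"
echo "$CMD"
rm -r "$OMITDIR"
```

On an affected system the list grows with every running Upstart job, which is why a compile-time limit of 16 was so easy to exceed.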

Revision history for this message
Björn Schließmann (b-schliessmann) wrote :

I opened a separate bug for the comment above (bug #634460).

Revision history for this message
Corrado Mella (c-mella) wrote :

Happening to me too, running Lucid with KDE 4.5 from the backports PPA.
I can see the
[..]
mount: / is busy
[..]
on shutdown.

It causes a recovery on boot
[..]
Sep 15 08:51:25 Satellite kernel: [ 2.944822] EXT3-fs: INFO: recovery required on readonly filesystem.
Sep 15 08:51:25 Satellite kernel: [ 2.944825] EXT3-fs: write access will be enabled during recovery.
Sep 15 08:51:25 Satellite kernel: [ 7.186038] kjournald starting. Commit interval 5 seconds
Sep 15 08:51:25 Satellite kernel: [ 7.186055] EXT3-fs: recovery complete.
Sep 15 08:51:25 Satellite kernel: [ 7.186835] EXT3-fs: mounted filesystem with ordered data mode.
[..]

Revision history for this message
Bert Voegele (bertvoegele-deactivatedaccount) wrote :

I see the same here (upgraded to 10.10 from 10.04, Gnome) and the system is fscking on _every_ boot. I removed "splash" to see the output during startup/shutdown - unfortunately it goes by too fast to be read. I tried the "lsof" from comment #7, but no file is created. To track this down further, I tried adding a "sleep 120", a "read" or an "echo" there too - but again without any effect. It almost looks like these umount scripts never get called.

Revision history for this message
Bert Voegele (bertvoegele-deactivatedaccount) wrote :

It seems that either bum (boot-up-manager) or rcconf or jobs-admin fiddled with the file names in /etc/rc[0|6].d/ in a way that all links started with "S03". After restoring the appropriate order the umount scripts are called _before_ the halt|reboot script.

As a side note: backintime works as expected.

Revision history for this message
Gard Spreemann (gspreemann) wrote :

Although I have not checked this thoroughly, it seems that logging out of KDE, switching to a text terminal, logging in there and issuing sudo poweroff induces a clean shutdown. Shutting down from within KDE (often?) gives behaviour as described by this bug.

Also: I am affected by this problem even though I don't have the symlink messup Bert Voegele describes in comment #12.

Revision history for this message
tekkenlord (linuxfever) wrote :

A working automated solution I found is to add "restart kdm" (restart gdm for gnome users) to "/etc/rc.local". You may also want to see this:

https://bugs.launchpad.net/ubuntu/+source/sysvinit/+bug/618786

Revision history for this message
Gard Spreemann (gspreemann) wrote :

tekkenlord's workaround does not work for me.

Revision history for this message
Zhang Weiwu (zhangweiwu) wrote :

I can confirm it happens to me with my favorite xdm/fluxbox combination, which I have used since early this century. In most cases I could not cleanly shut down, seeing the message "file system busy" for "/" before the power goes off on my laptop. Using 10.10 64-bit.

Revision history for this message
Alex Engelmann (alex-engelmann) wrote :

I often get the "unmounting weak filesystems" message on shutdown or reboot, but only rarely will it check the disks for errors on startup. It isn't causing me any problems, as the system shuts down or reboots normally otherwise. I'm running Ubuntu 10.10 64-bit.

Revision history for this message
Robert Marris (marris-rob) wrote :

I have the same problem. Running 10.10 32 bit, every shutdown has the "mount: / is busy" and "unmounting weak filesystem" messages, and every boot it has to recover the filesystem.

Tried rebooting without GDM running and still get the problem.

My system is pretty much stock; the only thing that makes it out-of-the-norm is I'm running LVM on top of mdraid.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

To those who have had this problem thus far.. do you have a separate partition for any parts of your filesystem, like /home, /var, etc.? Can you all post your /etc/fstab?

I am wondering if this is related to an earlier (fixed) bug in mountall that caused systems to not unmount all of the partitions before /

Revision history for this message
Gard Spreemann (gspreemann) wrote :

@Clint Byrum: I'm experiencing the problem both on machines with one big fat /-partition, and on machines that have at least /boot on a different partition.

Revision history for this message
Robert Marris (marris-rob) wrote :

Hi Clint
I've got a separate /boot and mount an LVM LV at /local

fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
/dev/mapper/bobvg-rootlv1 / ext4 errors=remount-ro 0 1
# boot from either sda1 or sdb1 then pick them up together as a raid1
/dev/md1 /boot ext4 defaults 0 2
# lv running on md0
/dev/mapper/bobvg-biglv /local ext4 defaults 0 2
# swap was on /dev/sdc1 during installation
UUID=5671ec36-0525-4980-aa6f-ad28d7e43649 none swap sw 0 0
# swap was on /dev/sde1 during installation
UUID=a58ab41b-dfb4-4d97-af30-98eee7f893e2 none swap sw 0 0

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Given the high number of people who agree this affects them, and the duplicate bug reports, I am marking this as Confirmed and setting the Importance to High.

I think this may actually be more related to mountall, as sysvinit does not mount things anymore, and so I'm reassigning the bug to mountall.

Changed in sysvinit (Ubuntu):
status: New → Confirmed
importance: Undecided → High
affects: sysvinit (Ubuntu) → mountall (Ubuntu)
Revision history for this message
tekkenlord (linuxfever) wrote :

 I am running Kubuntu 64-bit with only 1 root EXT3 partition and 1 swap, and I am NOT affected by this problem. Perhaps it has to do with the fact that I use EXT3?

Revision history for this message
tekkenlord (linuxfever) wrote :

By the way, I am running Maverick.

Revision history for this message
Gard Spreemann (gspreemann) wrote :

@tekkenlord: I've also experienced the problem on a computer with EXT3.

Revision history for this message
Paul van Berlo (pvanberlo) wrote :

I'm seeing the same on a CLI only install (Ubuntu 10.04 server cd / minimal install). This is actually delaying the installation of a new server for me, since I'm afraid of data corruption. It appears to be what Clint describes, I do not see any issues with a single root partition and swap, the issue started when I added a second EXT4 partition which is mounted somewhere under root.

Revision history for this message
Hans Bull (bullinger) wrote :

Affected on a new 10.10 installation w/ completely encrypted HD (ext4) on a Samsung NC 10 netbook.

Revision history for this message
Paul van Berlo (pvanberlo) wrote :

It appears my umount issue is caused by two things:

1) sshd no longer stops on lucid, due to an error in the upstart ssh.conf file
2) I did an upgrade of libc6, which caused the issue as well

At least it looks like this is it for me :-)

Revision history for this message
Gard Spreemann (gspreemann) wrote :

@Paul van Berlo. Regarding sshd, are you referring to bug #617515? I don't think they're related, because if I log out of my desktop environment (KDE), switch to a terminal and issue poweroff/reboot from there, I *don't* experience this problem (cf. comment #13). Surely the SSH daemon should behave the same whichever way I shut down?

Revision history for this message
Robert Marris (marris-rob) wrote :

@Gard Spreemann - rebooting from the shell after stopping my desktop (gdm) doesn't prevent this happening for me. I also tried stopping ssh before a reboot but still hit the problem.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

It is, of course, possible that something is still leaving filesystems "in use" before reboot, but sshd not dying wouldn't be one of them.. at some point upstart will send SIGKILL if something does not die (the default is 5 seconds after sending SIGTERM). So I doubt that sshd causes this.

Revision history for this message
Paul van Berlo (pvanberlo) wrote :

@Clint Byrum - bug 603363 seems to describe an issue where sshd is not properly stopped; this was fixed for maverick, but apparently not for lucid. I ran lsof in the umountroot script, i.e. right before the root fs should be remounted read-only. I don't think anything gets killed in there between my lsof and the actual remount. The lsof consistently shows 2 things running: init and sshd. Modifying the ssh.conf file to stop sshd on anything but runlevel 2345 solves that issue.

However, I think we're also still running into bug 188925. It appears that right after a libc6 upgrade it fails to properly remount root because it's busy. This leaves orphaned inodes and possibly some other mess. This doesn't happen on a completely clean install without updates installed. All of this goes away after the first reboot following the updates - so it has to be something related to the updates.

As for whether this is completely related to this bug, I'm not sure. I'm just seeing this behaviour and I'm still not 100% sure what to blame. Nonetheless, having a dirty filesystem on a new install (a production server, even!) right after doing some initial updates is kind of not making me feel very confident about this ;-)

Revision history for this message
Kris Verbeeck (kris-verbeeck) wrote :

I'm also having this problem. My /home partition (ReiserFS) doesn't cleanly unmount at shutdown, and the journal is replayed for this partition at every boot. I also see the '/ busy' message at shutdown, but don't see any journal replaying for / at boot-up. Anyone have any idea how to capture the shutdown console output? It scrolls by too fast to be able to read everything.

Note that the problem started after an upgrade to Ubuntu 10.10; I didn't notice any problems with 10.04.
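One way to capture that state before it scrolls away, modelled on the lsof trick from comment #7 - a hedged sketch, with an illustrative log path rather than any standard location:

```shell
# Dropping lines like these into a shutdown script (e.g. near the top of
# /etc/init.d/umountfs) saves the list of still-open files to disk so it
# can be read after the next boot. /tmp/shutdown-lsof.txt is illustrative.
logfile=/tmp/shutdown-lsof.txt
{
    date
    lsof 2>/dev/null || ps aux   # fall back to ps if lsof is unavailable
} > "$logfile"
wc -l < "$logfile"   # quick sanity check that something was captured
```

On a real system you would point logfile at a partition that stays mounted long enough, and read the file after rebooting.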

Revision history for this message
Björn Schließmann (b-schliessmann) wrote :

The problem I reported should be fixed by the following sysvinit update (first bullet, the killall5 problem). But it still seems to occur occasionally, and I couldn't trace the cause. Any advice on how to approach this is welcome.

sysvinit (2.87dsf-4ubuntu18) maverick; urgency=low

  * Allocate pidof/killall5 omitpid buffers dynamically. 16 is too small
    for killall5 now that all Upstart jobs are omitted.
  * Create /lib/init/rw as a symlink to /var/run on new installations, and
    fix it up in /etc/init.d/umountroot on upgrade, as it's difficult to do
    this at any other time; this saves us chasing around all the individual
    packages that use one or the other for sendsigs.omit.d (LP: #541512).
  * Handle /var/run/sendsigs.omit.d explicitly, just in case.

 -- Colin Watson <email address hidden> Fri, 24 Sep 2010 10:48:28 +0100

Revision history for this message
kif0rt (kif0rt) wrote :

The same problem for me. Ubuntu 10.10

Revision history for this message
Dr. Strangelove (richard-strangelove) wrote :

This bug affects me, Lubuntu 10.04.

@ Clint Byrum: I am using a separate /home partition; so I have one partition with the system and one for home. Haven't had this issue in previous versions of Ubuntu.

Revision history for this message
Alexey Molchanov (alexey-molchanov) wrote :

I have two installations of 10.10 and this problem affects both of them.
The first machine is a headless server with Ubuntu 10.10 Server installed, and the second is a netbook cleanly installed from the 10.10 Alternate CD (it runs GDM and the GNOME desktop). Both machines are i386 and use a similar disk partitioning scheme: they have a separate /boot partition, a swap partition and a single LVM physical volume. On LVM I created three logical volumes for /, /home and /var; all of them use the ext4 filesystem.
Very often after reboot they start fsck to check the root filesystem, I see the same messages as the original bug reporter during the boot process ("orphan cleanup...", "ext4_orphan_cleanup: deleting unreferenced inode").

Revision history for this message
kif0rt (kif0rt) wrote :

I noticed that if I manually disconnect through network-manager, the problem disappears. Maybe I'm just lucky, but it works for me. Nevertheless, it can't be considered a solution to the problem, as its disadvantages are obvious.

Revision history for this message
jon davies (daviesjpuk) wrote :

Same problem here with an install built with debootstrap.

It is the same whether I access root via /dev/sdb1 or via UUID

Revision history for this message
keith gilbert (k-gilbert) wrote :

Same issue here, Ubuntu 10.10 64-bit, with separate /var and /home partitions. Happens 95% of the time... it's hard not to shut down a laptop. I'm leaving Ubuntu (after 4 years) and installing Arch, as 4 months with no fix is rubbish.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

keith, sorry that you feel this way.

We're actually getting close to a fix for another bug, which I think is probably the same thing... (I'm not sure the feeling is strong enough to mark it a duplicate yet).

See bug #688541 for some suggestions we're considering to fix this.

In fact, it would be interesting to hear from affected people what this command yields:

fgrep "stop on runlevel" /etc/init/*.conf

It should show quite a few things from the base install, but there might be a common service, or type of service, that is too slow to shut down.

Changed in sysvinit (Ubuntu):
status: New → Confirmed
importance: Undecided → High
Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Given that sysvinit does do the unmounting, it's actually more likely that it, rather than mountall, is the culprit... so I've added a bug task for sysvinit.

Revision history for this message
keith gilbert (k-gilbert) wrote :

fgrep "stop on runlevel" /etc/init/*.conf

/etc/init/acpid.conf:stop on runlevel [!2345]
/etc/init/anacron.conf:stop on runlevel [!2345]
/etc/init/apport.conf:stop on runlevel [!2345]
/etc/init/atd.conf:stop on runlevel [!2345]
/etc/init/cron.conf:stop on runlevel [!2345]
/etc/init/cups.conf:stop on runlevel [016]
/etc/init/dbus.conf:stop on runlevel [06]
/etc/init/failsafe-x.conf:stop on runlevel [06]
/etc/init/gdm.conf:stop on runlevel [016]
/etc/init/irqbalance.conf:stop on runlevel [!2345]
/etc/init/mountall-shell.conf:stop on runlevel [06]
/etc/init/rc.conf:stop on runlevel [!$RUNLEVEL]
/etc/init/rcS.conf:stop on runlevel [!S]
/etc/init/rc-sysinit.conf:stop on runlevel
/etc/init/rsyslog.conf:stop on runlevel [06]
/etc/init/tty1.conf:stop on runlevel [!2345]
/etc/init/tty2.conf:stop on runlevel [!23]
/etc/init/tty3.conf:stop on runlevel [!23]
/etc/init/tty4.conf:stop on runlevel [!23]
/etc/init/tty5.conf:stop on runlevel [!23]
/etc/init/tty6.conf:stop on runlevel [!23]
/etc/init/udev.conf:stop on runlevel [06]
/etc/init/ufw.conf:stop on runlevel [!023456]

Revision history for this message
ingo (ingo-steiner) wrote :

Same observation here on Lucid-amd64. /var/log/messages:

....
Dec 15 11:58:39 localhost kernel: [ 17.565185] kjournald starting. Commit interval 5 seconds
Dec 15 11:58:39 localhost kernel: [ 17.565226] EXT3-fs: sda6: orphan cleanup on readonly fs
Dec 15 11:58:39 localhost kernel: [ 17.599090] EXT3-fs: sda6: 12 orphan inodes deleted
Dec 15 11:58:39 localhost kernel: [ 17.603934] EXT3-fs: recovery complete.
Dec 15 11:58:39 localhost kernel: [ 17.665705] EXT3-fs: mounted filesystem with ordered data mode.
Dec 15 11:58:39 localhost kernel: [ 32.719906] Adding 1044216k swap on /dev/sda3. Priority:-1 extents:1 across:1044216k
....

and:

 fgrep "stop on runlevel" /etc/init/*.conf

/etc/init/acpid.conf:stop on runlevel [!2345]
/etc/init/anacron.conf:stop on runlevel [!2345]
/etc/init/apport.conf:stop on runlevel [!2345]
/etc/init/atd.conf:stop on runlevel [!2345]
/etc/init/cron.conf:stop on runlevel [!2345]
/etc/init/dbus.conf:stop on runlevel [06]
/etc/init/failsafe-x.conf:stop on runlevel [06]
/etc/init/gdm.conf:stop on runlevel [016]
/etc/init/idmapd.conf:stop on runlevel [06]
/etc/init/irqbalance.conf:stop on runlevel [06]
/etc/init/mountall-shell.conf:stop on runlevel [06]
/etc/init/rc.conf:stop on runlevel [!$RUNLEVEL]
/etc/init/rcS.conf:stop on runlevel [!S]
/etc/init/rc-sysinit.conf:stop on runlevel
/etc/init/rsyslog.conf:stop on runlevel [06]
/etc/init/ssh.conf:stop on runlevel S
/etc/init/tty1.conf:stop on runlevel [!2345]
/etc/init/tty2.conf:stop on runlevel [!23]
/etc/init/tty3.conf:stop on runlevel [!23]
/etc/init/tty4.conf:stop on runlevel [!23]
/etc/init/tty5.conf:stop on runlevel [!23]
/etc/init/tty6.conf:stop on runlevel [!23]
/etc/init/udev.conf:stop on runlevel [06]
/etc/init/ufw.conf:stop on runlevel [!023456]

Revision history for this message
ingo (ingo-steiner) wrote :

Similar on Maverick:
just booted up, installed pending updates and rebooted.

[ 3.891174] EXT4-fs (sda1): INFO: recovery required on readonly filesystem
[ 3.891909] EXT4-fs (sda1): write access will be enabled during recovery
[ 10.432447] EXT4-fs (sda1): orphan cleanup on readonly fs
[ 10.434587] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 136847
[ 10.435122] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147354
[ 10.435188] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147410
[ 10.435244] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147420
[ 10.435298] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147414
[ 10.435349] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147418
[ 10.435404] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147350
[ 10.435601] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 147408
[ 10.435658] EXT4-fs (sda1): 8 orphan inodes deleted
[ 10.436735] EXT4-fs (sda1): recovery complete
[ 10.921192] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[ 20.849897] Adding 407548k swap on /dev/sda5. Priority:-1 extents:1 across:407548k

and:

fgrep "stop on runlevel" /etc/init/*.conf

/etc/init/acpid.conf:stop on runlevel [!2345]
/etc/init/anacron.conf:stop on runlevel [!2345]
/etc/init/apport.conf:stop on runlevel [!2345]
/etc/init/atd.conf:stop on runlevel [!2345]
/etc/init/cron.conf:stop on runlevel [!2345]
/etc/init/cups.conf:stop on runlevel [016]
/etc/init/dbus.conf:stop on runlevel [06]
/etc/init/failsafe-x.conf:stop on runlevel [06]
/etc/init/gdm.conf:stop on runlevel [016]
/etc/init/irqbalance.conf:stop on runlevel [!2345]
/etc/init/mountall-shell.conf:stop on runlevel [06]
/etc/init/rc.conf:stop on runlevel [!$RUNLEVEL]
/etc/init/rcS.conf:stop on runlevel [!S]
/etc/init/rc-sysinit.conf:stop on runlevel
/etc/init/rsyslog.conf:stop on runlevel [06]
/etc/init/tty1.conf:stop on runlevel [!2345]
/etc/init/tty2.conf:stop on runlevel [!23]
/etc/init/tty3.conf:stop on runlevel [!23]
/etc/init/tty4.conf:stop on runlevel [!23]
/etc/init/tty5.conf:stop on runlevel [!23]
/etc/init/tty6.conf:stop on runlevel [!23]
/etc/init/udev.conf:stop on runlevel [06]
/etc/init/ufw.conf:stop on runlevel [!023456]

Revision history for this message
ingo (ingo-steiner) wrote :

Remark:

Both cases (Lucid and Maverick) with orphaned inodes happened after upgrading some packages.
Could it be that a sync/commit is missing when unmounting the root filesystem?

ingo (ingo-steiner)
tags: added: lucid-amd64
Revision history for this message
Gard Spreemann (gspreemann) wrote :

@ingo: I suspect it's something like that, yeah. I've found that logging out of my desktop (KDE), switching to a terminal, logging in, syncing and rebooting results in far fewer cleanups on reboot (if any, not really sure).

Revision history for this message
ingo (ingo-steiner) wrote :

I just checked my netbook with Lucid-i386: the very same as the Lucid-amd64 report from 15.12 above:
12 inodes orphaned on 14.12.

It just occurred to me:

could it be that this happens when data recorded by 'ureadahead' are updated, and reprofiling on the next reboot was forgotten?

Revision history for this message
Paul van Berlo (pvanberlo) wrote :

I still believe this has something to do with libc6. I only see this issue after an upgrade of libc6, it appears upstart or sysvinit still references the old version. The issue has been documented on the web a few times, and whatever I do, this only happens after a libc6 upgrade. Subsequent reboots are fine.

Revision history for this message
ingo (ingo-steiner) wrote :

@Paul

I can now confirm your suspicion and reproduce it.
I have a VM here with Lucid-amd64 and a snapshot taken on Dec. 3rd, and did the following:

Boot VM
check 'cat /var/log/messages | grep orphan' -> no matches

wait ca. 5 minutes until pending updates are reported: 29 altogether
install all updates with the exception of libc-bin, libc-dev-bin, libc6, libc6-dev
the system requests "reboot required", so

shutdown
boot-up again

check 'cat /var/log/messages | grep orphan' -> no matches

install the remaining 4 libc packages
no reboot is requested by the system!

shutdown
boot-up again

cat /var/log/messages | grep orphan
Dec 17 12:16:44 lucid kernel: [ 2.976983] EXT3-fs: sda1: orphan cleanup on readonly fs
Dec 17 12:16:44 lucid kernel: [ 2.981618] EXT3-fs: sda1: 8 orphan inodes deleted

And as expected we get the orphaned inodes.

This can be reproduced at any time; I just have to revert to the machine's snapshot.
If you need further information collected, please let me know - it's really easy now.

Additional remarks:

a) I have purged plymouth completely by using a liberated mountall package (otherwise I would not even have noticed this bug, since I watch the boot messages on tty1).

b) This never happened with Debian Squeeze - so it is Ubuntu-specific.

c) The shutdown process here is extremely fast, generally 5 sec. or even less.

Revision history for this message
Paul van Berlo (pvanberlo) wrote :

I opened Bug #672177 last month to see if this can be fixed. It's currently assigned to the eglibc package. Unfortunately it doesn't seem anyone is interested in resolving it. I'd like to stress that it appears EVERYONE who installs 10.04 or 10.10 will have this issue eventually when they update their packages and a libc update is included.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Hi Paul, I went ahead and raised the severity of 672177 to Critical. Since it only happens once after an update, I don't think it's the same bug, though it may be the source of some of the reports, as people may see it every time they restart after an eglibc update.

Revision history for this message
ingo (ingo-steiner) wrote :

So I got one step further:

the fs corruption definitely happens on shutdown:

1. upgraded libc6 and also forced ureadahead to reprofile on reboot with 'rm /var/lib/ureadahead/pack*' -> orphaned inodes

2. Upgraded libc6 and afterwards shut down the VM. Booted up again from a Knoppix 6.1 DVD and checked the filesystem:
 -> 8 corrupted inodes (I am typing and translating, as I only have a screenshot):

fsck -f /dev/hda1
fsck 1.41.3 (12-Oct-2008)
/dev/hda1: replaying journal
cleaning orphaned inode 185586 (uid=0, gid=0, mode=0100644, size=51712)
... 7 similar lines, all uid=0, gid=0, follow ...
Pass 1: Checking Inodes
Pass 2:
...

So this definitely indicates that the bug is caused at shutdown. It shows that the journal had not been committed.

So my guess is that upgrading libc6 is just one (reliable) trigger for the root cause, "filesystem not cleanly unmounted". There may be other occasions causing the same or even more severe corruption. libc6 is just convenient for tracing the bug.

Revision history for this message
ingo (ingo-steiner) wrote :

Just in case the full output of fsck is needed, I am uploading it here.

Maybe this is also responsible for other strange things, like missing icons, ... after reboot???

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

ingo, I just noticed that your ssh.conf has

stop on runlevel S

That was fixed in maverick under bug #603363, and accepted as an SRU we need to push back to Lucid. That won't *solve* this definitively, as sendsigs/umountfs might still beat openssh in the race condition because of bug #688541, but it should help. Until that SRU is done, you can manually change it to 'stop on runlevel [!2345]'.

Not a solution, but it should reduce the likelihood of umounts being prevented from completing.
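For anyone applying that change by hand, here is a hedged sketch of the edit; the stanza below is a scratch reconstruction standing in for the Lucid file, not a verbatim copy of it:

```shell
# Scratch copy standing in for /etc/init/ssh.conf (edit the real file as root)
cat > /tmp/ssh.conf <<'EOF'
description "OpenSSH server"
start on filesystem
stop on runlevel S
EOF

# Rewrite the stop condition as suggested above, so sshd also stops on the
# halt/reboot runlevels instead of only on S
sed -i 's/^stop on runlevel S$/stop on runlevel [!2345]/' /tmp/ssh.conf
grep '^stop on' /tmp/ssh.conf
# → stop on runlevel [!2345]
```

After editing the real file, the change takes effect on the next shutdown; no daemon restart is needed for the stanza itself to be re-read.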

And to everybody who is participating, thanks for posting your greps, it helps!

Revision history for this message
Tim Cuthbertson (ratcheer) wrote :

I have been having this problem in Maverick for several weeks. It occurs often, but not always - probably about 60% of the time when I shut down. I run the GNOME desktop, /home is on a separate partition, and all partitions are ext4. Here is my output from the requested fgrep command:

fgrep "stop on runlevel" /etc/init/*.conf
/etc/init/acpid.conf:stop on runlevel [!2345]
/etc/init/anacron.conf:stop on runlevel [!2345]
/etc/init/apport.conf:stop on runlevel [!2345]
/etc/init/atd.conf:stop on runlevel [!2345]
/etc/init/cron.conf:stop on runlevel [!2345]
/etc/init/cups.conf:stop on runlevel [016]
/etc/init/dbus.conf:stop on runlevel [06]
/etc/init/failsafe-x.conf:stop on runlevel [06]
/etc/init/gdm.conf:stop on runlevel [016]
/etc/init/irqbalance.conf:stop on runlevel [!2345]
/etc/init/mountall-shell.conf:stop on runlevel [06]
/etc/init/mysql.conf:stop on runlevel [016]
/etc/init/rc.conf:stop on runlevel [!$RUNLEVEL]
/etc/init/rcS.conf:stop on runlevel [!S]
/etc/init/rc-sysinit.conf:stop on runlevel
/etc/init/rsyslog.conf:stop on runlevel [06]
/etc/init/tty1.conf:stop on runlevel [!2345]
/etc/init/tty2.conf:stop on runlevel [!23]
/etc/init/tty3.conf:stop on runlevel [!23]
/etc/init/tty4.conf:stop on runlevel [!23]
/etc/init/tty5.conf:stop on runlevel [!23]
/etc/init/tty6.conf:stop on runlevel [!23]
/etc/init/udev.conf:stop on runlevel [06]
/etc/init/ufw.conf:stop on runlevel [!023456]

Revision history for this message
ingo (ingo-steiner) wrote :

I am unsubscribing from this bug for the time being. It does not make sense to deal with the symptoms until the root of the evil, Bug #672177, is fixed.

Revision history for this message
Tim Cuthbertson (ratcheer) wrote :

In reference to ingo's comment, just above, the problem I am having is completely unrelated to installations of libc packages. It happens most of the time I restart my Maverick system, whether or not any packages have been installed or updated.

Revision history for this message
Kurt R. Rahlfs (kurtrr) wrote :

Well, I don't know if this is the same or not; it is the closest I've found for my issue. I have root and swap on an SSD: sda1 (ext2) & sda5 (swap), and home as a link to a folder on the fs (ext3) on sdb1. I no longer get the corrupt fs on sda1 that I used to; however, I do get it every time I attempt to resume from hibernate. I still don't get it on a normal shutdown any more. Any comments or requests? Or is this another bug?

Revision history for this message
Martin Havran (martin-havran) wrote :

I don't know if it's the same bug, but on every boot my / and /home are checked for errors.
Running Ubuntu 10.04 Lucid x86

I'm not afraid of data loss because I have backups, but I am afraid of potential disk damage because of these unclean shutdowns.

Revision history for this message
Garry Leach (garry-leach) wrote :

I have Mythbuntu 10.10 on one PC, & Ubuntu 10.10 on another.

I haven't noticed this bug on my Ubuntu PC (I don't think that I get any close-down messages displayed).

On the Mythbuntu PC, I upgraded to 10.10 (from 9.10, via 10.04) in March 2011.

I have started seeing this bug recently. I don't regularly watch the closing-down messages, because I shut down just before I go to bed each night.

I have the OS + /home all on the same partition of an 80GB HDD.

I have 2 other HDD (320GB & 1TB); these are mounted on /mnt, & used for all of my recorded TV, plus other videos. I don't remember the age of the 320GB HDD - this could be 3 years old. The 1TB HDD is 1 year old. They are about 50% full.

All HDDs are ext4.

The PC itself is from a new build 3 years ago.

I had been thinking that I needed to replace the 1TB HDD, due to what seems to be disk errors, but this bug report seems to suggest that the problem may be in Ubuntu.

I am getting various odd behaviours, but these are subject to a different bug report:
. pauses during playback
. exit to desktop from within playback (i.e. Myth frontend shuts itself down while paused or at the end of a recording, before I have a chance to delete the file).
. some others

Regards, Garry.

Revision history for this message
Martin Havran (martin-havran) wrote :

Hmm... I discovered that the couchdb packages cause this error. I logged in via console and ran

sudo fuser -m /dev/sda9 ....... this is my /home partition

I looked up the PIDs and about 90% of them were *couchdb* processes.

I thought they could be causing this error, so I uninstalled couchdb, python-couchdb and desktopcouch // maybe I forgot some package; I can say that I've uninstalled every package with "couchdb" in its name //
This also uninstalled Gwibber and gwibber-service because of dependencies, but I didn't use them.

After the next restart the filesystem was checked again, but subsequent restarts were OK, and I haven't had this error any more.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

I've targeted the sysvinit portion of this at oneiric alpha 1.

I hope we'll decide to undertake an ambitious plan of moving the entire contents of the sysvinit shutdown to upstart jobs for Oneiric. Meanwhile, sendsigs needs to be SRU'd back to lucid/maverick/natty to stop all of the running jobs, not just kill their pids. So the plan is to fix that in Oneiric early, then do the SRUs from there (even though it's possible, even likely, that sendsigs will subsequently be folded into an upstart job).

Changed in sysvinit (Ubuntu):
milestone: none → oneiric-alpha-1
status: Confirmed → Triaged
Revision history for this message
codewarrior (lvr) wrote :

I believe that this problem is caused by the /etc/init.d/sendsigs script not correctly waiting for terminating Upstart jobs.

I came across this problem on a Maverick system used for MythTV. One night it was left to transcode and advert flag a movie (lots of disk activity) after which it automatically powered off. The next morning I turned on the system and /var didn't mount due to errors.

I kept an eye on the system thereafter thinking it was a disk fault but noticed the same orphaned inode and recovering journal messages on every restart. I found in this case that shutting down mysqld in advance of calling poweroff resolved the problems. However, that made me look further and I realised that mysql was only a problem because it took several seconds or more to shutdown. It looked like there was a race condition in the shutdown logic.

On entering runlevel 0 or 6 (halt or reboot), Upstart delivers a TERM signal to all processes that should stop at that runlevel. The service doesn't have to terminate immediately - some services like mysql take a few seconds to tidy up. An upstart service can define a kill timeout stanza to specify a stop time if it's in excess of 5 seconds. After this time Upstart will deliver a KILL signal.

The problem is that immediately after Upstart sends the TERM signals, it starts the rc.conf service to run the System V scripts in /etc/rc0.d. One particular script, S20sendsigs, is responsible for TERMinating all remaining processes. However, its logic excludes any Upstart job, so it can exit believing that all processes are dead. Then /etc/rc0.d/S40umountfs unmounts the filesystem while there are open files and hey-presto: orphaned inodes & worse...

The attached patch makes sendsigs wait for any Upstart jobs that are stopping. This fixes all my file corruption problems and since using it I've not had any orphaned inodes.

This has made me question the correctness and safety of umount. I would have thought that it should work OK in the presence of open files, but I'm no ext3 expert.
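The race described above hinges on sendsigs skipping processes that belong to Upstart jobs, even when those jobs are already on their way to 'stop'. A rough shell sketch of the missing check (not the actual patch; the sample `initctl list` output below is made up for illustration):

```shell
#!/bin/sh
# Jobs whose goal is "stop" but which have not yet reached the final
# stop/waiting state still own processes; unmounting before they exit
# is what produces orphaned inodes.
sample='mysql stop/post-stop, process 1234
cron stop/waiting
tty1 start/running, process 980'

# On a real system the input would come from `initctl list`.
stopping=$(printf '%s\n' "$sample" |
    awk '$2 ~ /^stop\// && $2 != "stop/waiting" {print $1}')

echo "$stopping"   # prints: mysql
```

sendsigs would need to wait for (or TERM/KILL) any job this filter reports before letting umountfs run.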

Revision history for this message
codewarrior (lvr) wrote :

Fix for orphaned inodes

tags: added: patch
Revision history for this message
Clint Byrum (clint-fewbar) wrote : Re: [Bug 616287] Re: umountfs doesn't cleanly unmount / on reboot

Excerpts from codewarrior's message of Tue Apr 26 14:34:44 UTC 2011:
> I believe that this problem is caused by the /etc/init.d/sendsigs script
> not correctly waiting for terminating Upstart jobs.
>
> I came across this problem on a Maverick system used for MythTV. One
> night it was left to transcode and advert flag a movie (lots of disk
> activity) after which it automatically powered off. The next morning I
> turned on the system and /var didn't mount due to errors.
>
> I kept an eye on the system thereafter thinking it was a disk fault but
> noticed the same orphaned inode and recovering journal messages on every
> restart. I found in this case that shutting down mysqld in advance of
> calling poweroff resolved the problems. However, that made me look
> further and I realised that mysql was only a problem because it took
> several seconds or more to shutdown. It looked like there was a race
> condition in the shutdown logic.
>
> On entering runlevel 0 or 6 (halt or reboot), Upstart delivers a TERM
> signal to all processes that should stop at that runlevel. The service
> doesn't have to terminate immediately - some services like mysql take a
> few seconds to tidy up. An upstart service can define a kill timeout
> stanza to specify a stop time if it's in excess of 5 seconds. After
> this time Upstart will deliver a KILL signal.
>
> The problem is that immediately after Upstart sends the TERM signals it
> starts the rc.conf service to run the SystemV scripts in /etc/rc0.d.
> One particular script, S20sendsigs, is responsible for TERMinating all
> remaining processes. However, it's logic is to exclude any Upstart job
> and so it can exit believing that all processes are dead. Then
> /etc/rc0.d/S40umountfs unmounts the fllesystem while there are open
> files and hey-presto orphaned inodes & worse...
>
> The attached patch makes sendsigs wait for any Upstart jobs that are
> stopping. This fixes all my file corruption problems and since using it
> I've not had any orphaned inodes.

codewarrior, thanks for taking a stab at this; it's something that's on
my TODO list for the next month to solve.

The patch does a nice job of enforcing for upstart jobs the exact same
rules as other processes. On first glance it might look like we'll kill
some processes that we shouldn't, but in fact it does quite a good job
of only TERM/KILL'ing jobs that are already bound for 'stop' status.

It unfortunately causes us to ignore the 'kill timeout' setting that each
job can specify. I'd like to see us respect that and extend sendsigs's
deadline to the maximum of those.
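That suggestion could be sketched roughly as follows. This is an illustrative fragment only, not the actual sendsigs code; the here-document stands in for 'kill timeout' stanzas grepped out of real /etc/init/*.conf files:

```shell
#!/bin/sh
# Extend the shutdown deadline to the largest 'kill timeout' any
# Upstart job declares, falling back to Upstart's 5-second default.
max=5
while read -r _ _ secs; do
    if [ "$secs" -gt "$max" ]; then
        max=$secs
    fi
done <<EOF
kill timeout 30
kill timeout 10
EOF

echo "deadline: ${max}s"   # prints: deadline: 30s
```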

One other minor issue is that it has some races in it. initctl does not
do any kind of locking, it just loops through asking upstart for the
status of each job. So you may end up with jobs that say "start" but,
for whatever reason, are moved to stop right after you asked.

Still that is an imperfection that I think we can live with. This is
also one that we can easily SRU back to lucid/maverick/natty.

I'll target this to Oneiric, and once the fix has dropped there, we
can start the SRU process to all the other active releases.

>
> This has made be question the correctness and safet...


Changed in sysvinit (Ubuntu):
assignee: nobody → Clint Byrum (clint-fewbar)
Revision history for this message
codewarrior (lvr) wrote :

I completely agree, this patch is not the final solution - it's just a 'sticking plaster'. Better solutions must include changes to Upstart which ultimately must handle system shutdown internally. Maybe there's a case for adding a shutdown event which then blocks new services & tasks from starting (like shutdown does for logins) and enforces the kill timeout.

I would hate to see a system shutdown 'hanging' in a recovery shell simply because of a few open files. Umount must be made more resilient. Maybe we need a force option - try gently at first and if that fails then at certain critical conditions, like shutdown, the umount can be forced. The issue of open/deleted files can then be addressed after umount is complete.

What worries me most is the damage to the journal that I saw during my first experience with this problem. I ran fsck in manual mode to repair the damage, which it eventually managed, but it left the ext3 disk without a journal. I had to manually fix this by running tune2fs -j.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Excerpts from codewarrior's message of Wed Apr 27 08:18:28 UTC 2011:
> I completely agree, this patch is not the final solution - it's just a
> 'sticking plaster'. Better solutions must include changes to Upstart
> which ultimately must handle system shutdown internally. Maybe there's
> a case for adding a shutdown event which then blocks new services &
> tasks from starting (like shutdown does for logins) and enforces the
> kill timeout.
>
> I would hate to see a system shutdown 'hanging' in a recovery shell
> simply because of a few open files. Umount must be made more resilient

Revision history for this message
Peter Júnoš (petoju) wrote :

The problem still appears. This should be a Critical bug, as Ubuntu does not repair itself (for me) and needs a LiveCD.

Revision history for this message
Stefan Glasenhardt (glasen) wrote :

Any news on this bug? Milestone "oneiric-alpha-1" was reached last week.

Bryce Harrington (bryce)
Changed in sysvinit (Ubuntu):
milestone: oneiric-alpha-1 → oneiric-alpha-2
Changed in sysvinit (Ubuntu):
status: Triaged → In Progress
Revision history for this message
Clint Byrum (clint-fewbar) wrote :

If one creates a python script like this:

#!/usr/bin/python

import signal
import time
import sys
import tempfile

def term_handler(signum, frame):
    # Simulate a slow-stopping service: linger for 15 seconds
    # after receiving SIGTERM before exiting.
    time.sleep(15)
    sys.exit(0)

signal.signal(signal.SIGTERM, term_handler)

# TemporaryFile is unlinked on creation; keeping it open across an
# unmount is what leaves an orphaned inode behind.
t = tempfile.TemporaryFile(mode='w')

# Sleep until a signal arrives. (A bare "while time.sleep(3600)" would
# exit after one interval, since sleep() returns None.)
while True:
    time.sleep(3600)

And runs it with a job file like this (run "sudo start test" after creating the file below):

# /etc/init/test.conf
stop on runlevel [016]

kill timeout 30

exec /path/to/test.py

With the current initscripts version, one will get orphaned inodes because the deleted tempfile was left open on reboot.

With the new version just uploaded (2.87dsf-4ubuntu25) the system should boot cleanly as the partially stopped process will have been killed off.
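Whether a given boot hit the symptom can be checked from the kernel log. On a real system one would run `dmesg | grep -Ei 'orphan|recovery'`; the sample below mimics the lines from the original report:

```shell
#!/bin/sh
# Count kernel-log lines indicating ext4 journal recovery or orphan
# cleanup after the previous unclean unmount.
sample='[    2.720951] EXT4-fs (sda5): INFO: recovery required on readonly filesystem
[    4.952641] EXT4-fs (sda5): 1 orphan inode deleted'

hits=$(printf '%s\n' "$sample" | grep -Eic 'orphan|recovery')
echo "$hits"   # prints: 2
```

Zero hits after applying the fix indicates the root filesystem was unmounted cleanly.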

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package sysvinit - 2.87dsf-4ubuntu25

---------------
sysvinit (2.87dsf-4ubuntu25) oneiric; urgency=low

  * debian/initscripts/etc/init.d/sendsigs: Only omit jobs that
    are in the 'start' goal. Those that are destined for 'stop' are
    waited on and killed like all other processes. (LP: #616287) Thanks
    to Launchpad user "codewarrior".
 -- Clint Byrum <email address hidden> Tue, 07 Jun 2011 14:06:07 -0700

Changed in sysvinit (Ubuntu):
status: In Progress → Fix Released
Changed in mountall (Ubuntu):
status: Confirmed → Invalid
Changed in mountall (Ubuntu Lucid):
status: New → Invalid
Changed in mountall (Ubuntu Maverick):
status: New → Invalid
Changed in mountall (Ubuntu Natty):
status: New → Invalid
Changed in sysvinit (Ubuntu Lucid):
status: New → Triaged
Changed in sysvinit (Ubuntu Maverick):
status: New → Triaged
Changed in sysvinit (Ubuntu Natty):
status: New → Triaged
Changed in sysvinit (Ubuntu Lucid):
importance: Undecided → High
milestone: none → ubuntu-10.04.3
Changed in sysvinit (Ubuntu Maverick):
importance: Undecided → High
Changed in sysvinit (Ubuntu Natty):
importance: Undecided → High
Changed in sysvinit (Ubuntu Natty):
assignee: nobody → Canonical Foundations Team (canonical-foundations)
Revision history for this message
mevdschee (mevdschee) wrote :

patch works for me (natty) with ext2 on one big root partition, thanks!

Revision history for this message
blitzter47 (blitzter47) wrote :

I'm new to Ubuntu (10.10) and I have this bug. I don't understand how to fix it, as there is a fix released. Can someone explain it to me, please?

Revision history for this message
piwacet (davrosmeglos) wrote :

Curious - is fixing this bug still on the radar for natty? Thanks!

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

piwacet, I think we need to fix it in Natty and all other supported releases, yes. However, other things have taken priority over this particular fix. If somebody wants to backport the change from oneiric to 11.04 I'd be quite happy to sponsor that upload. Please see https://wiki.ubuntu.com/StableReleaseUpdates for help on how to get it ready for upload.

Revision history for this message
piwacet (davrosmeglos) wrote :

I had a go at this.

I'm not a technical person, and can not vouch for the technical accuracy of this. Thanks for looking at this, and if I've done this incorrectly, I apologize for wasting people's time.

In a freshly installed virtualbox ubuntu natty, I verified that the procedure in comment #71 does in fact cause this problem, and that making changes to /etc/init.d/sendsigs from this patch:

http://launchpadlibrarian.net/73183009/sysvinit_2.87dsf-4ubuntu24_2.87dsf-4ubuntu25.diff.gz

does fix the problem. (This patch is based on codewarrior's patch from comment #65.)

I made a debdiff based on the natty package "sysvinit - 2.87dsf-4ubuntu23" with the change to /etc/init.d/sendsigs incorporated, following this procedure ("Tutorial 1: Providing a Debdiff"):

https://wiki.ubuntu.com/PackagingGuide/HandsOn

I'll attach the debdiff.

Again, if I've done this incorrectly, sorry for wasting people's time. I could likely make another attempt at this if there is a better way. Thanks!

Revision history for this message
piwacet (davrosmeglos) wrote :
Revision history for this message
piwacet (davrosmeglos) wrote :

Would it have made more sense to make this change in the package 'sysvinit-utils?'

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Not at all piwacet, this looks great!

I'm going to test it in a natty VM tomorrow, and if it looks good, I'll upload to natty-proposed.

Revision history for this message
piwacet (davrosmeglos) wrote :

I was wondering what the status of this is. I'm happy to test the natty-proposed package. I can test it in both a virtualbox VM and on my laptop on bare metal.

Thanks!

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

piwacet, I'm sorry; with the release of 11.10 coming soon, my time for SRUs has been limited. I'll try to test this out. In the meantime, I've subscribed ubuntu-sponsors to it, so somebody else may take a crack at it before I can.

Revision history for this message
šumski (schumski-deactivatedaccount-deactivatedaccount) wrote :

This still happens for me on Oneiric. Is there a workaround?

Revision history for this message
šumski (schumski-deactivatedaccount-deactivatedaccount) wrote :

Sorry for the noise; the problem was the one described in comment #12.

Revision history for this message
David Clayton (dcstar) wrote :

I have just manually downloaded and installed the 11.10 package (2.88dsf-13.10ubuntu4) on my 10.04 system and it installed without any dependency issues:

http://packages.ubuntu.com/oneiric/sysvinit-utils

Revision history for this message
David Clayton (dcstar) wrote :

I have just solved my unclean unmounting issue; the solution was incredibly simple:

http://ubuntuforums.org/showpost.php?p=11253379&postcount=4

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

David, I'm not sure what broke your /etc/rc0.d, but S03 is definitely *not* where it should normally be:

lrwxrwxrwx 1 root root 17 2011-03-29 21:23 K09apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 19 2011-03-17 15:25 K20denyhosts -> ../init.d/denyhosts
lrwxrwxrwx 1 root root 28 2011-05-02 04:39 K20gearman-job-server -> ../init.d/gearman-job-server
lrwxrwxrwx 1 root root 19 2011-05-02 04:33 K20memcached -> ../init.d/memcached
lrwxrwxrwx 1 root root 24 2011-05-09 22:56 K20mk-slave-delay -> ../init.d/mk-slave-delay
lrwxrwxrwx 1 root root 17 2011-04-08 19:05 K20postfix -> ../init.d/postfix
-rw-r--r-- 1 root root 353 2011-06-08 18:32 README
lrwxrwxrwx 1 root root 29 2011-03-15 06:55 S10unattended-upgrades -> ../init.d/unattended-upgrades
lrwxrwxrwx 1 root root 22 2011-03-15 06:59 S15wpa-ifupdown -> ../init.d/wpa-ifupdown
lrwxrwxrwx 1 root root 18 2011-03-15 06:48 S20sendsigs -> ../init.d/sendsigs
lrwxrwxrwx 1 root root 17 2011-03-15 06:48 S30urandom -> ../init.d/urandom
lrwxrwxrwx 1 root root 22 2011-03-15 06:48 S31umountnfs.sh -> ../init.d/umountnfs.sh
lrwxrwxrwx 1 root root 20 2011-03-15 06:48 S35networking -> ../init.d/networking
lrwxrwxrwx 1 root root 18 2011-03-15 06:48 S40umountfs -> ../init.d/umountfs
lrwxrwxrwx 1 root root 20 2011-03-15 06:48 S60umountroot -> ../init.d/umountroot
lrwxrwxrwx 1 root root 14 2011-03-15 06:48 S90halt -> ../init.d/halt

Revision history for this message
David Clayton (dcstar) wrote :

It seems to be an issue with 10.04 systems; even a brand-new 10.04.3 desktop install I made last week had it. I have subsequently found that others have reported it:

https://bugs.launchpad.net/ubuntu/+source/sysvinit/+bug/739007

Revision history for this message
Steve Langasek (vorlon) wrote :

On Sat, Oct 01, 2011 at 07:47:11AM -0000, David Clayton wrote:
> It seems to be an issue with 10.04 systems, even a brand new 10.04.3
> desktop install I made last week had it. I have subsequently found that
> others have reported it:

> https://bugs.launchpad.net/ubuntu/+source/sysvinit/+bug/739007

That bug is about using insserv. Did you reconfigure your system to use
insserv?

insserv is not supported on Ubuntu.

--
Steve Langasek Give me a lever long enough and a Free OS
Debian Developer to set it on, and I can move the world.
Ubuntu Developer http://www.debian.org/
<email address hidden> <email address hidden>

Revision history for this message
David Clayton (dcstar) wrote :

I had believed that particular bug referred to the sysvinit package. In any case, it was still there in a BRAND NEW 10.04.3 install I did last week, and I don't recall installing anything out of the ordinary. I have seen many posts in the Ubuntu forums from people with "dirty" filesystems after a shutdown/reboot, and this sort of problem would explain it.

Revision history for this message
Ruben (info-rubenfelix) wrote :

Hey!

Thanks for your mail! I've slipped away to a nice warm country for a bit! I'll answer your mail after my holiday (11 October).

Greetings!

Ruben

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Nobody is saying that this bug does not exist. It's well known to exist and has a known fix. The time to backport, test, and upload said fixes will be more abundant after the 11.10 release, which is just 2 weeks away now.

Revision history for this message
David Clayton (dcstar) wrote :

OK, I have now found what was breaking my scripts: VMware Workstation 8/Player. I verified that the install process of this program changed them on my fresh 10.04 test system. I have reported it in the VMware forums.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

piwacet, finally got around to testing out your debdiff. Looks good, uploaded to natty-proposed

Revision history for this message
Martin Pitt (pitti) wrote : Please test proposed package

Hello Björn, or anyone else affected,

Accepted sysvinit into natty-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!

Changed in sysvinit (Ubuntu Natty):
status: Triaged → Fix Committed
tags: added: verification-needed
Revision history for this message
piwacet (davrosmeglos) wrote :

I'll try to give this a test in the next week if that time frame works.

Thanks.

Revision history for this message
K1773R (k1773r) wrote :

any plans on fixing this for 10.10 maverick?

greetings

Revision history for this message
Fernando Carlos de Sousa (fernandoanatomia) wrote :

I'm seeing this bug in Kubuntu 11.10 64 bit.

Fresh install and after regular updates and installing basic programs from the official repository and medibuntu.

Revision history for this message
piwacet (davrosmeglos) wrote :

I think I'm confused about something. Neither Natty nor Natty-proposed seem to have a package "sysvinit," but they do have "sysvinit-utils."

So from natty-proposed I installed the package sysvinit-utils_2.87dsf-4ubuntu23.1_amd64.deb.

This did not make the change from the diff to /etc/init.d/sendsigs, and it did not fix the original problem of orphaned inodes.

I do see the packages "sysv-rc" and "initscripts" in natty-proposed that have suspiciously similar version numbers to "sysvinit-utils."

I'll install both of these packages from natty-proposed and report back.

Revision history for this message
Steve Langasek (vorlon) wrote : Re: [Bug 616287] Re: umountfs doesn't cleanly unmount / on reboot

On Wed, Oct 26, 2011 at 10:46:56PM -0000, piwacet wrote:
> I think I'm confused about something. Neither Natty nor Natty-proposed
> seem to have a package "sysvinit," but they do have "sysvinit-utils."

sysvinit is the source package name.

> So from natty-proposed I installed the package sysvinit-utils_2.87dsf-
> 4ubuntu23.1_amd64.deb.

> This did not make the change from the diff to /etc/init.d/sendsigs, and
> it did not fix the original problem of orphaned inodes.

> I do see the packages "sysv-rc" and "initscripts" in natty-proposed that
> have suspiciously similar version numbers to "sysvinit-utils."

> I'll install both of these packages from natty-proposed and report back.

Yep, that's the right thing to do here.

--
Steve Langasek Give me a lever long enough and a Free OS
Debian Developer to set it on, and I can move the world.
Ubuntu Developer http://www.debian.org/
<email address hidden> <email address hidden>

Revision history for this message
piwacet (davrosmeglos) wrote :

OK. It's working.

I installed a fully up-to-date natty 64 bit desktop in both a virtualbox and on bare metal on my old macbook 2,1.

Again I verified that the procedure in comment #71 does in fact produce the problem with orphaned inodes.

From natty-proposed I installed these packages:

sysvinit-utils_2.87dsf-4ubuntu23.1_amd64.deb
sysv-rc_2.87dsf-4ubuntu23.1_all.deb
initscripts_2.87dsf-4ubuntu23.1_amd64.deb

This does make the changes to /etc/init.d/sendsigs from the patch, and it does fix the problem with orphaned inodes (tested on both the macbook and virtualbox)

I don't use these installations much, so I can not tell if these updates cause other unexpected problems.

Thanks!

Martin Pitt (pitti)
tags: added: verification-done
removed: verification-needed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package sysvinit - 2.87dsf-4ubuntu23.1

---------------
sysvinit (2.87dsf-4ubuntu23.1) natty-proposed; urgency=low

  * debian/initscripts/etc/init.d/sendsigs: Only omit jobs that
    are in the 'start' goal. Those that are destined for 'stop' are
    waited on and killed like all other processes. (LP: #616287) Thanks
    to Launchpad user "codewarrior".
 -- Bill Cahill <email address hidden> Tue, 13 Sep 2011 22:41:05 -0700

Changed in sysvinit (Ubuntu Natty):
status: Fix Committed → Fix Released
Revision history for this message
Clint Byrum (clint-fewbar) wrote :

Since this was already fixed in natty, and there are no other patches attached, unsubscribing ubuntu-sponsors.

Revision history for this message
JC Hulce (soaringsky) wrote :

This bug affects Ubuntu 10.10, Maverick Meerkat. Maverick has reached end-of-life and is no longer supported, so I am closing the bugtask for Maverick. Please upgrade to a newer version of Ubuntu.
More information here: https://lists.ubuntu.com/archives/ubuntu-announce/2012-April/000158.html

Changed in sysvinit (Ubuntu Maverick):
status: Triaged → Invalid
Revision history for this message
Alexander (lxandr) wrote :

Also affects quantal 12.10.
Please, see my comments here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1073433

Alexander (lxandr)
tags: added: quantal
Revision history for this message
sghpunk (sgh-mail) wrote :

Affects quantal 12.04.
Seems like it is similar to this bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1073433

tags: added: precise
Revision history for this message
sghpunk (sgh-mail) wrote :

Affects precise 12.04.
Seems like it is similar to this bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1073433

Revision history for this message
Rolf Leggewie (r0lf) wrote :

lucid has seen the end of its life and is no longer receiving any updates. Marking the lucid task for this ticket as "Won't Fix".

Changed in sysvinit (Ubuntu Lucid):
status: Triaged → Won't Fix