grub.cfg not updated; Segmentation fault displayed
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
grub2 (Ubuntu) | Expired | Undecided | Unassigned |
Bug Description
Binary package hint: grub2
Karmic
1.97~beta4-
Running update-grub should update grub.cfg, but installing a new kernel failed to configure grub.cfg:
root@a1-
Reading package lists... Done
Building dependency tree
Reading state information... Done
linux-generic is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
3 not fully installed or removed.
After this operation, 0B of additional disk space will be used.
Setting up linux-image-
Running depmod.
update-initramfs: Generating /boot/initrd.
Running postinst hook script /usr/sbin/
Generating grub.cfg ...
Segmentation fault
User postinst hook script [/usr/sbin/
dpkg: error processing linux-image-
subprocess installed post-installation script returned error exit status 139
dpkg: dependency problems prevent configuration of linux-image-
linux-
Package linux-image-
dpkg: error processing linux-image-generic (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of linux-generic:
linux-generic depends on linux-image-generic (= 2.6.31.21.34); however:
Package linux-image-generic is not configured yet.
dpkg: error processing linux-generic (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
linux-
linux-
linux-generic
E: Sub-process /usr/bin/dpkg returned an error code (1)
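The "error exit status 139" reported by dpkg is consistent with the segmentation fault: shells encode death-by-signal as 128 plus the signal number. A minimal sketch of the arithmetic:

```shell
# Exit statuses above 128 mean the process was killed by a signal:
# signal number = status - 128.
status=139
sig=$((status - 128))
# Signal 11 is SIGSEGV, matching the "Segmentation fault" in the log above.
echo "exit status $status = signal $sig (SIGSEGV)"
```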
Investigating the update-grub call, I find that
grub-probe --device /dev/mapper/
is failing with a Segmentation fault.
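One way to narrow down which scan trips the crash is to trace the failing invocation (a sketch; the device argument is a placeholder, since the exact mapped device path is truncated above):

```shell
# Trace grub-probe's system calls up to the crash; the last file or
# device touched before SIGSEGV usually points at what it choked on.
strace -f -o /tmp/grub-probe.trace grub-probe --device /dev/mapper/VG-LV
tail -n 20 /tmp/grub-probe.trace
```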
I have my PVs on RAID1 arrays. I've recently added another pair (sdc and sdd):
root@a1-
--- Logical volume ---
LV Name /dev/vg.
VG Name vg.sys
LV UUID u0cSQw-
LV Write Access read/write
LV Status available
# open 0
LV Size 7.50 GB
Current LE 240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:6
--- Segments ---
Logical extent 0 to 239:
Type linear
Physical volume /dev/md9
Physical extents 392 to 631
root@a1-
PV VG Fmt Attr PSize PFree
/dev/md10 vg.bak lvm2 a- 15.69G 5.22G
/dev/md11 vg.bak lvm2 a- 15.69G 0
/dev/md12 vg.bak lvm2 a- 7.84G 0
/dev/md13 vg.sys lvm2 a- 5.38G 5.38G
/dev/md14 vg.sys lvm2 a- 224.00M 224.00M
/dev/md15 vg.sys lvm2 a- 3.78G 3.78G
/dev/md16 vg.sys lvm2 a- 3.78G 3.78G
/dev/md17 vg.sys lvm2 a- 231.88G 231.88G
/dev/md18 vg.sys lvm2 a- 231.88G 231.88G
/dev/md19 vg.sys lvm2 a- 231.88G 231.88G
/dev/md2 vg.sys lvm2 a- 3.91G 0
/dev/md20 vg.sys lvm2 a- 231.88G 231.88G
/dev/md21 vg.sys lvm2 a- 231.88G 167.88G
/dev/md22 vg.sys lvm2 a- 231.88G 231.88G
/dev/md23 vg.ba2 lvm2 a- 231.88G 231.88G
/dev/md24 vg.ba2 lvm2 a- 231.91G 131.91G
/dev/md4 vg.sys lvm2 a- 15.69G 2.00G
/dev/md5 vg.sys lvm2 a- 31.38G 10.09G
/dev/md6 vg.sys lvm2 a- 31.38G 17.38G
/dev/md7 vg.sys lvm2 a- 31.38G 0
/dev/md8 vg.sys lvm2 a- 31.38G 6.00G
/dev/md9 vg.sys lvm2 a- 31.38G 4.00G
root@a1-
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md10 : active raid1 sdb12[0] sda12[1]
16450432 blocks [2/2] [UU]
md1 : active raid1 sdb2[0] sda2[1]
4064320 blocks [2/2] [UU]
md0 : active raid1 sda1[1] sdb1[0]
64128 blocks [2/2] [UU]
md11 : active raid1 sda13[1] sdb13[0]
16450432 blocks [2/2] [UU]
md4 : active raid1 sdb6[0] sda6[1]
16450432 blocks [2/2] [UU]
md7 : active raid1 sdb9[0] sda9[1]
32900992 blocks [2/2] [UU]
md3 : active raid1 sda5[1] sdb5[0]
8225152 blocks [2/2] [UU]
md13 : active raid1 sdb15[0] sda15[1]
5662784 blocks [2/2] [UU]
md12 : active raid1 sdb14[0] sda14[1]
8225152 blocks [2/2] [UU]
md2 : active raid1 sdb3[0] sda3[1]
4096448 blocks [2/2] [UU]
md5 : active raid1 sda7[1] sdb7[0]
32900992 blocks [2/2] [UU]
md8 : active raid1 sda10[1] sdb10[0]
32900992 blocks [2/2] [UU]
md6 : active raid1 sdb8[0] sda8[1]
32900992 blocks [2/2] [UU]
md9 : active raid1 sdb11[0] sda11[1]
32900992 blocks [2/2] [UU]
md17 : active raid1 sdd5[1] sdc5[0]
243159744 blocks [2/2] [UU]
md14 : active raid1 sdc1[0] sdd1[1]
248896 blocks [2/2] [UU]
md19 : active raid1 sdd7[1] sdc7[0]
243159744 blocks [2/2] [UU]
md24 : active raid1 sdd12[1] sdc12[0]
243175808 blocks [2/2] [UU]
md23 : active raid1 sdd11[1] sdc11[0]
243159744 blocks [2/2] [UU]
md16 : active raid1 sdd3[1] sdc3[0]
3984000 blocks [2/2] [UU]
md15 : active raid1 sdd2[1] sdc2[0]
3984000 blocks [2/2] [UU]
md20 : active raid1 sdd8[1] sdc8[0]
243159744 blocks [2/2] [UU]
md22 : active raid1 sdd10[1] sdc10[0]
243159744 blocks [2/2] [UU]
md21 : active raid1 sdd9[1] sdc9[0]
243159744 blocks [2/2] [UU]
md18 : active raid1 sdd6[1] sdc6[0]
243159744 blocks [2/2] [UU]
unused devices: <none>
Using gdb:
root@a1-
GNU gdb (GDB) 7.0-ubuntu
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i486-linux-gnu".
For bug reporting instructions, please see:
<http://
Reading symbols from /usr/sbin/
(gdb) run --device /dev/mapper/
Starting program: /usr/sbin/
Program received signal SIGSEGV, Segmentation fault.
0x080493db in ?? ()
(gdb) bt
#0 0x080493db in ?? ()
#1 0x080498c1 in ?? ()
#2 0x00157b56 in __libc_start_main () from /lib/tls/
#3 0x080490d1 in ?? ()
(gdb) quit
A debugging session is active.
Inferior 1 [process 32293] will be killed.
Quit anyway? (y or n) y
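The "??" frames in the backtrace mean gdb has no debug symbols for the stripped binary. A sketch for getting a readable trace, assuming the Ubuntu ddeb archive carries symbols for this release and that grub-probe ships in grub-common (both assumptions; adjust the package name if needed):

```shell
# Add the Ubuntu debug-symbol archive and install symbols for grub-probe.
echo "deb http://ddebs.ubuntu.com karmic main restricted universe multiverse" \
    > /etc/apt/sources.list.d/ddebs.list
apt-get update
apt-get install grub-common-dbgsym
# Re-run under gdb; "bt" should now show function names instead of "??".
gdb --args grub-probe --device /dev/mapper/VG-LV   # placeholder; use the real mapped device
```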
ProblemType: Bug
Architecture: i386
Date: Sun May 2 00:00:22 2010
DistroRelease: Ubuntu 9.10
InstallationMedia: Ubuntu 9.10 "Karmic Koala" - Release i386 (20091028.5)
Package: grub-common 1.97~beta4-
ProcEnviron:
PATH=(custom, no user)
LANG=en_GB.UTF-8
SHELL=/bin/bash
ProcVersionSign
SourcePackage: grub2
Uname: Linux 2.6.31-20-generic i686
Today I ran the Lucid live CD. grub-probe from the CD does not exhibit this behaviour.