XFS internal error xfs_trans_cancel at line 1164 of file /build/buildd/linux-2.6.27/fs/xfs/xfs_trans.c

Bug #294259 reported by Brian J. Murrell
This bug affects 4 people
Affects: linux (Ubuntu)
Status: Won't Fix
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Got this on my Intrepid (2.6.27-7) last night:

Nov 4 17:25:55 pc kernel: [10142.198218] Filesystem "dm-25": XFS internal error xfs_trans_cancel at line 1164 of file /build/buildd/linux-2.6.27/fs/xfs/xfs_trans.c. Caller 0xf95840c9
Nov 4 17:25:56 pc kernel: [10142.198240] Pid: 22487, comm: rsync Tainted: P 2.6.27-7-generic #1
Nov 4 17:25:56 pc kernel: [10142.198267] [<f955cbc3>] xfs_error_report+0x53/0x60 [xfs]
Nov 4 17:25:56 pc kernel: [10142.198358] [<f95840c9>] ? xfs_link+0x159/0x2d0 [xfs]
Nov 4 17:25:56 pc kernel: [10142.198384] [<f957d4e2>] xfs_trans_cancel+0xd2/0xf0 [xfs]
Nov 4 17:25:56 pc kernel: [10142.198442] [<f95840c9>] ? xfs_link+0x159/0x2d0 [xfs]
Nov 4 17:25:56 pc kernel: [10142.198465] [<f95840c9>] xfs_link+0x159/0x2d0 [xfs]
Nov 4 17:25:56 pc kernel: [10142.198484] [<c01c6a59>] ? __iget+0x9/0x60
Nov 4 17:25:56 pc kernel: [10142.198508] [<f958f793>] xfs_vn_link+0x43/0xa0 [xfs]
Nov 4 17:25:56 pc kernel: [10142.198543] [<c01bb81c>] vfs_link+0xfc/0x170
Nov 4 17:25:56 pc kernel: [10142.198546] [<c01bd908>] sys_linkat+0xf8/0x110
Nov 4 17:25:56 pc kernel: [10142.198550] [<c01b5df7>] ? sys_lstat64+0x27/0x30
Nov 4 17:25:56 pc kernel: [10142.198557] [<c01781a4>] ? __report_bad_irq+0x14/0xa0
Nov 4 17:25:56 pc kernel: [10142.198564] [<c01bd955>] sys_link+0x35/0x40
Nov 4 17:25:56 pc kernel: [10142.198567] [<c0103f7b>] sysenter_do_call+0x12/0x2f
Nov 4 17:25:56 pc kernel: [10142.198572] =======================
Nov 4 17:25:56 pc kernel: [10142.198576] xfs_force_shutdown(dm-25,0x8) called from line 1165 of file /build/buildd/linux-2.6.27/fs/xfs/xfs_trans.c. Return address = 0xf957d4fa
Nov 4 17:25:56 pc kernel: [10142.199754] Filesystem "dm-25": Corruption of in-memory data detected. Shutting down filesystem: dm-25
Nov 4 17:25:56 pc kernel: [10142.199761] Please umount the filesystem, and rectify the problem(s)
Nov 4 17:25:56 pc kernel: [10142.376160] Filesystem "dm-25": xfs_log_force: error 5 returned.

I did unmount the filesystem and xfs_repair'd it; the repair advised that I remount to replay the log and then try the repair again, which I did, and after some number of minutes the filesystem was fixed and I could remount it.
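
For reference, that repair sequence amounts to roughly the following; this is a minimal sketch, assuming the affected volume is /dev/dm-25 mounted at /mnt/data (the mount point is hypothetical):

umount /mnt/data            # the shut-down filesystem must be unmounted first
xfs_repair /dev/dm-25       # first pass; with a dirty log it refuses and asks for a replay
mount /dev/dm-25 /mnt/data  # mounting replays the log
umount /mnt/data
xfs_repair /dev/dm-25       # second pass runs against the now-clean log
mount /dev/dm-25 /mnt/data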

Revision history for this message
Ronald van Engelen (ronalde) wrote :

The same error occurred here; without any hints (not out of RAM, no other errors) the filesystem was shut down.

System details:
 * hardy 2.6.24-16-server x86_64 (2.6.24-16.30), xfsprogs 2.9.4-2, lvm2 2.02.26-1ubuntu9
 * dual 3ware 9660 SAS controllers in RAID 6, coupled together in dm RAID 0

After the error below occurred, I rebooted the system without any errors (not even XFS check messages).

[427206.973979] Filesystem "dm-1": XFS internal error xfs_trans_cancel at line 1163 of file /build/buildd/linux-2.6.24/fs/xfs/xfs_trans.c. Caller 0xffffffff882a2997
[427206.974057] Pid: 5736, comm: nfsd Not tainted 2.6.24-16-server #1
[427206.974059]
[427206.974060] Call Trace:
[427206.974094] [<ffffffff882a2997>] :xfs:xfs_create+0x1e7/0x520
[427206.974117] [<ffffffff8829a3b4>] :xfs:xfs_trans_cancel+0x104/0x130
[427206.974140] [<ffffffff882a2997>] :xfs:xfs_create+0x1e7/0x520
[427206.974163] [<ffffffff882ae2c7>] :xfs:xfs_vn_mknod+0x1b7/0x300
[427206.974178] [<ffffffff882ad6f5>] :xfs:xfs_vn_permission+0x15/0x20
[427206.974189] [<ffffffff802bec36>] vfs_create+0x106/0x1a0
[427206.974202] [<ffffffff885833cc>] :nfsd:nfsd_create_v3+0x37c/0x490
[427206.974216] [<ffffffff8858a617>] :nfsd:nfsd3_proc_create+0x127/0x1c0
[427206.974227] [<ffffffff8857c271>] :nfsd:nfsd_dispatch+0xb1/0x240
[427206.974246] [<ffffffff884d5dad>] :sunrpc:svc_process+0x47d/0x7e0
[427206.974252] [<ffffffff80236260>] default_wake_function+0x0/0x10
[427206.974258] [<ffffffff80470552>] __down_read+0x12/0xb1
[427206.974267] [<ffffffff8857c810>] :nfsd:nfsd+0x0/0x2e0
[427206.974274] [<ffffffff8857c99f>] :nfsd:nfsd+0x18f/0x2e0
[427206.974282] [<ffffffff8020d198>] child_rip+0xa/0x12
[427206.974289] [<ffffffff8857c810>] :nfsd:nfsd+0x0/0x2e0
[427206.974302] [<ffffffff8857c810>] :nfsd:nfsd+0x0/0x2e0
[427206.974305] [<ffffffff8020d18e>] child_rip+0x0/0x12
[427206.974309]
[427206.974312] xfs_force_shutdown(dm-1,0x8) called from line 1164 of file /build/buildd/linux-2.6.24/fs/xfs/xfs_trans.c. Return address = 0xffffffff8829a3cd
[427206.974318] Filesystem "dm-1": Corruption of in-memory data detected. Shutting down filesystem: dm-1
[427206.974366] Please umount the filesystem, and rectify the problem(s)

Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

Damnit. Just got another one of these. Is nobody from Ubuntu going to even triage this report? This is a very serious bug for anyone using the XFS filesystem.

Jan 26 12:59:32 pc kernel: [678572.673862] Pid: 32373, comm: rsync Tainted: P 2.6.27-10-generic #1
Jan 26 12:59:32 pc kernel: [678572.673876] [<f9532bc3>] xfs_error_report+0x53/0x60 [xfs]
Jan 26 12:59:32 pc kernel: [678572.673942] [<f955a0c9>] ? xfs_link+0x159/0x2d0 [xfs]
Jan 26 12:59:32 pc kernel: [678572.673967] [<f95534e2>] xfs_trans_cancel+0xd2/0xf0 [xfs]
Jan 26 12:59:32 pc kernel: [678572.674001] [<f955a0c9>] ? xfs_link+0x159/0x2d0 [xfs]
Jan 26 12:59:32 pc kernel: [678572.674025] [<f955a0c9>] xfs_link+0x159/0x2d0 [xfs]
Jan 26 12:59:32 pc kernel: [678572.674044] [<c01c6ca9>] ? __iget+0x9/0x60
Jan 26 12:59:32 pc kernel: [678572.674066] [<f9565793>] xfs_vn_link+0x43/0xa0 [xfs]
Jan 26 12:59:32 pc kernel: [678572.674099] [<c01bba6c>] vfs_link+0xfc/0x170
Jan 26 12:59:32 pc kernel: [678572.674102] [<c01bdb58>] sys_linkat+0xf8/0x110
Jan 26 12:59:32 pc kernel: [678572.674106] [<c01b6047>] ? sys_lstat64+0x27/0x30
Jan 26 12:59:32 pc kernel: [678572.674112] [<c01bdba5>] sys_link+0x35/0x40
Jan 26 12:59:32 pc kernel: [678572.674115] [<c0103f7b>] sysenter_do_call+0x12/0x2f
Jan 26 12:59:32 pc kernel: [678572.674126] =======================
Jan 26 12:59:32 pc kernel: [678572.674131] xfs_force_shutdown(dm-23,0x8) called from line 1165 of file /build/buildd/linux-2.6.27/fs/xfs/xfs_trans.c. Return address = 0xf95534fa
Jan 26 12:59:33 pc kernel: [678573.004030] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 12:59:55 pc kernel: [678595.128029] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:00:31 pc kernel: [678631.149032] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:01:07 pc kernel: [678667.148030] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:01:43 pc kernel: [678703.152128] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:02:19 pc kernel: [678739.156075] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:02:55 pc kernel: [678775.157023] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:03:31 pc kernel: [678811.161180] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:04:07 pc kernel: [678847.161053] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:04:43 pc kernel: [678883.160029] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:05:19 pc kernel: [678919.161025] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:05:55 pc kernel: [678955.161048] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:06:31 pc kernel: [678991.161034] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:07:07 pc kernel: [679027.165621] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:07:43 pc kernel: [679063.165061] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:08:19 pc kernel: [679099.165064] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:08:55 pc kernel: [679135.164060] Filesystem "dm-23": xfs_log_force: error 5 returned.
Jan 26 13:09:31 pc kernel: [679171.164062] Filesystem "...


Revision history for this message
Ronald van Engelen (ronalde) wrote :

Some updates which might explain the failure I've seen.

During December (about a month after my post) our server shut itself down twice at night. Furthermore, nfs-kernel-server disappeared each day. While we were analyzing this, the server finally shut down again one night in January and didn't start up anymore.

Supermicro engineers concluded that the motherboard or the power distribution unit (it has two redundant power units) had defects. After replacing the case with PDU (Supermicro SC216) and the motherboard (Supermicro X7DBU), the problems seemed solved, until the nfs-kernel-server processes started disappearing again on a daily basis, without a clue as to why.

After moving the NFS server services to another host (with intrepid server amd64) and mounting the problematic lvm2 logical volume through AoE on the new host, we saw a host of other NFS- and XFS-related problems (see below).

After stopping nfsd and unmounting the filesystem, I was instructed to run `xfs_repair -L /dev/etherd/e5.1` to clear the log (this showed numerous errors), which I did, and then remounted the filesystem (without errors). However, `xfs_check /dev/etherd/e5.1` still found countless problems.

My guess is that the filesystem got corrupted during the hardware failures. I will try to make a fresh backup of the data on the filesystem and run `xfs_repair` on it; if that fails I will remake the filesystem and restore the data onto it to see if this finally fixes things. I'll post updates in this thread.
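
In outline, that plan would look something like the sketch below; only the device name comes from this report, the mount point and backup target are hypothetical:

umount /mnt/share
xfs_repair -n /dev/etherd/e5.1          # dry run: report problems, modify nothing
mount /dev/etherd/e5.1 /mnt/share
rsync -aHAX /mnt/share/ /backup/share/  # fresh backup of whatever is still readable
umount /mnt/share
xfs_repair /dev/etherd/e5.1             # real repair attempt
# if that fails: remake the filesystem and restore from the backup
# mkfs.xfs -f /dev/etherd/e5.1
# mount /dev/etherd/e5.1 /mnt/share && rsync -aHAX /backup/share/ /mnt/share/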

BTW:
 * in a thread on the xfs mailing list (http://oss.sgi.com/archives/xfs/2009-02/msg00058.html) a similar problem with Intrepid (2.6.27-9-server) is discussed.
 * nfs-kernel-server bug #181996 seemed related before I discovered the filesystem corruption

----

syslog on new nfs-kernel host:

Feb 6 17:53:49 srv-twin3-2 kernel: [2689800.846117] XFS mounting filesystem etherd/e5.1
Feb 6 17:53:49 srv-twin3-2 kernel: [2689800.927304] Ending clean XFS mount for filesystem: etherd/e5.1
Feb 6 17:55:29 srv-twin3-2 kernel: [2689901.054444] aoe: 001517766839 e5.1 v4010 has 2147483648 sectors
Feb 6 17:56:13 srv-twin3-2 kernel: [2689944.367616] XFS mounting filesystem etherd/e5.1
Feb 6 17:56:13 srv-twin3-2 kernel: [2689944.508963] Starting XFS recovery on filesystem: etherd/e5.1 (logdev: internal)
Feb 6 17:56:13 srv-twin3-2 kernel: [2689944.847588] XFS resetting qflags for filesystem etherd/e5.1
Feb 6 17:56:20 srv-twin3-2 kernel: [2689951.167148] Ending XFS recovery on filesystem: etherd/e5.1 (logdev: internal)
Feb 6 17:56:54 srv-twin3-2 kernel: [2689985.890915] Installing knfsd (copyright (C) 1996 <email address hidden>).
Feb 6 17:56:54 srv-twin3-2 kernel: [2689985.936999] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Feb 6 17:56:54 srv-twin3-2 kernel: [2689985.937015] NFSD: starting 90-second grace period
Feb 6 18:00:01 srv-twin3-2 /USR/SBIN/CRON[25657]: (root) CMD ([ -x /usr/sbin/update-motd ] && /usr/sbin/update-motd 2>/dev/null)
Feb 6 18:01:46 srv-twin3-2 mountd[25626]: Caught signal 15, un-registering and exiting.
Feb 6 18:01:46 srv-twin3-2 kernel: [2690277.358093] nfsd: last server has exited, flushing export cache
Feb 6 18:01:49 srv-twin3-2 kernel: ...


Revision history for this message
Brian J. Murrell (brian-interlinx) wrote : Re: [Bug 294259] Re: XFS internal error xfs_trans_cancel at line 1164 of file /build/buildd/linux-2.6.27/fs/xfs/xfs_trans.c

The two instances of this that I have experienced did not involve any
hardware failures of any sort.

The first instance was relatively painless, as the filesystem was
fairly empty.

The second instance, however, took almost 24 hours to xfs_repair, and I
only did the repair so that I could get the data off XFS and onto
something more stable.

I have since given up on XFS, as this kind of problem, its longevity, and
its apparent "unimportantness" to Ubuntu have told me all I want to know
about XFS.

If this problem is in fact fixed in the XFS world and Ubuntu has simply
failed to pick that fix up, they are doing the XFS community a huge
disservice.

Revision history for this message
Ronald van Engelen (ronalde) wrote :

Final follow-up:

On February 9th, the filesystem became unmountable and unrepairable.

We copied the logical volume to an external hard disk for analysis by Kroll Ontrack. It took their engineers 4 weeks to finish; they told me they had never seen this much corruption before, but they managed to get roughly 80% of the data back.

Of course this was a big disaster for everyone (it hosted personal and shared files as well as mailboxes for 5500 users).

In spite of Brian's remarks I really think it was caused by the unique hardware failure inside the server (somehow it killed power to all internal components). In my opinion, XFS is still rock solid (as long as there's some juice), although I'm developing a growing interest in ZFS.

Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

On Mon, 2009-03-23 at 08:19 +0000, Ronald van Engelen wrote:
>
> In spite of Brian's remarks I really think it was caused by the unique
> hardware failure inside the server (somehow it killed power to all
> internal components). In my opinion, XFS is still rock solid (as long as
> there's some juice) although I'm developing a growing interest in ZFS.

I have to strongly disagree. In my cases (yes, multiple) of corruption,
there was absolutely no loss of power or other environmental factors
involved. XFS simply pooched.

Revision history for this message
Sewi (sebastian-willing) wrote :

This bug is known and already solved - but not for Ubuntu:

> On the kernel side the big excitement in January was an in-memory corruption introduced in the btree refactoring which hit people
> running 32bit platforms without support for large block devices. This issue was fixed and pushed to the 2.6.29 development tree after a
> long collaborative debugging effort at linux.conf.au. Besides that about a dozen minor fixes were pushed to 2.6.29 and the first batch of
> misc patches for the 2.6.30 release cycle was sent out.

from: http://xfs.org/index.php/XFS_Status_Updates#XFS_status_update_for_January_2009

To fix this, upgrade to at least 2.6.29 - but Ubuntu 8.04 Server offers 2.6.24 as the latest.
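
As a quick sanity check on a given box, one can compare the running kernel against that version; a minimal sketch (the 2.6.29 cut-off is taken from the status update quoted above, and the script assumes a sort with version-sort support, e.g. GNU coreutils 7+):

required=2.6.29
current=$(uname -r | cut -d- -f1)   # e.g. "2.6.24-16-server" -> "2.6.24"
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" != "$required" ]; then
    echo "kernel $current predates $required; the XFS fix may be missing"
fi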

Unfortunately, my local server which has the problem triggers this error as soon as I run "dpkg --configure -a" for upgrading to at least 2.6.24. :-(

Revision history for this message
kernel-janitor (kernel-janitor) wrote :

Hi Brian,

Please be sure to confirm this issue exists with the latest development release of Ubuntu. ISO CD images are available from http://cdimage.ubuntu.com/releases/karmic . If the issue remains, please run the following command from a Terminal (Applications->Accessories->Terminal). It will automatically gather and attach updated debug information to this report.

apport-collect -p linux 294259

Also, if you could test the latest upstream kernel available that would be great. It will allow additional upstream developers to examine the issue. Refer to https://wiki.ubuntu.com/KernelMainlineBuilds . Once you've tested the upstream kernel, please remove the 'needs-upstream-testing' tag. This can be done by clicking on the yellow pencil icon next to the tag located at the bottom of the bug description and deleting the 'needs-upstream-testing' text. Please let us know your results.

Thanks in advance.

[This is an automated message. Apologies if it has reached you inappropriately; please just reply to this message indicating so.]

tags: added: needs-kernel-logs
tags: added: needs-upstream-testing
tags: added: kj-triage
Changed in linux (Ubuntu):
status: New → Incomplete
Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

On Wed, 2009-09-02 at 00:22 +0000, kernel-janitor wrote:
> Hi Brian,

Hi.

> Please be sure to confirm this issue exists with the latest development
> release of Ubuntu.

No thanks. I will not risk another byte of data or any more of my time
to XFS, thank you very much.

> ISO CD images are available from
> http://cdimage.ubuntu.com/releases/karmic . If the issue remains,

It's not reproducible at will, so without putting some expendable but
frequently changing data (no such thing) on XFS, I cannot be sure. But
TBH, I don't really care any more. See above about me and XFS.

I guess you can just close this unless somebody else is willing to be a
guinea pig.

Sorry to be so unhelpful, but I did file this bug a long, long time ago
and there was no help for it then, when I was willing to help with it.
Too much time has passed, and interest in XFS has severely waned,
especially since ext4 hit the streets.

Revision history for this message
Leann Ogasawara (leannogasawara) wrote :

Thanks for the update, Brian. I'll just go ahead and close this out per your previous comment.

Changed in linux (Ubuntu):
status: Incomplete → Won't Fix
Revision history for this message
Sewi (sebastian-willing) wrote :

Leann,

please look at the report. Comment #7, written on 2009-06-03, clearly identifies the problem and has a link explaining the details.

This bug is not reliably reproducible. I got it on a high-usage server after more than a year of use, and on another box which isn't used heavily after a few months. Another box which was used by a customer crashed after two weeks, and we had a very hard time recovering the data.

It happens whenever the XFS free-blocks table gets corrupted. If you're lucky, the corruption isn't written to disk before the server crashes. Otherwise, the crash recurs every time the system requests free blocks and jumps into the corrupted area. (I'm neither a kernel nor an XFS guy, so please don't slap me if this is not 100% correct.)
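
To tell those two cases apart after a forced shutdown, a read-only check of the on-disk state is one way; a minimal sketch, with a hypothetical device and mount point:

umount /mnt/data             # the shut-down filesystem must be unmounted first
xfs_check /dev/dm-1          # read-only consistency check from xfsprogs
xfs_repair -n /dev/dm-1      # alternative: repair in no-modify (dry-run) mode
# a clean report suggests the corruption never reached the disk;
# reported errors mean a real xfs_repair (without -n) is needed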

This is clearly not a "no-one-would-like-to-test-it-so-we-ignore-and-close-it" bug; it's a really critical thing, but it was finally resolved months ago upstream. It should be easy to finally solve this issue on Ubuntu, too.

All of us currently face a trade-off: either you get a power outage or crash and ext3 "recovers" your filesystem by killing all the files, or you're crash-safe but may run into the XFS bug at any time.

Best Regards,
Sebastian

Revision history for this message
Oleg K (okobyzev) wrote : apport-collect data

.etc.asound.conf:
 pcm.!default {
      type plug
      slave.pcm "hdmi"
   }
Architecture: amd64
ArecordDevices:
 **** List of CAPTURE Hardware Devices ****
 card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
   Subdevices: 1/1
   Subdevice #0: subdevice #0
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/dsp', '/dev/snd/pcmC0D0p', '/dev/snd/hwC0D3', '/dev/snd/pcmC0D1p', '/dev/snd/by-path', '/dev/snd/controlC0', '/dev/snd/pcmC0D0c', '/dev/snd/pcmC0D3p', '/dev/snd/hwC0D0', '/dev/snd/seq', '/dev/snd/timer', '/dev/sequencer', '/dev/sequencer2'] failed with exit code 1:
CRDA: Error: [Errno 2] No such file or directory
Card0.Amixer.info:
 Card hw:0 'NVidia'/'HDA NVidia at 0xfae78000 irq 22'
   Mixer name : 'Nvidia MCP7A HDMI'
   Components : 'HDA:10ec0662,19daa108,00100101 HDA:10de0007,10de0101,00100100'
   Controls : 28
   Simple ctrls : 15
DistroRelease: Ubuntu 9.10
HibernationDevice: RESUME=UUID=0418c18d-3c96-41fe-93f5-b8251e5c922e
MachineType: To Be Filled By O.E.M. To Be Filled By O.E.M.
NonfreeKernelModules: nvidia
Package: linux (not installed)
ProcCmdLine: root=UUID=04293d21-dd45-4f10-bf8a-ca665aeededc ro quiet splash
ProcEnviron:
 SHELL=/bin/bash
 LANG=en_US.UTF-8
ProcVersionSignature: Ubuntu 2.6.31-20.58-generic
RelatedPackageVersions:
 linux-backports-modules-2.6.31-20-generic 2.6.31-20.22
 linux-firmware 1.26
RfKill:
 0: phy0: Wireless LAN
  Soft blocked: no
  Hard blocked: no
Uname: Linux 2.6.31-20-generic x86_64
UserGroups:

WpaSupplicantLog:

dmi.bios.date: 08/13/2009
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: 080015
dmi.board.asset.tag: To Be Filled By O.E.M.
dmi.board.name: To be filled by O.E.M.
dmi.board.vendor: To be filled by O.E.M.
dmi.board.version: To be filled by O.E.M.
dmi.chassis.asset.tag: To Be Filled By O.E.M.
dmi.chassis.type: 3
dmi.chassis.vendor: To Be Filled By O.E.M.
dmi.chassis.version: To Be Filled By O.E.M.
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr080015:bd08/13/2009:svnToBeFilledByO.E.M.:pnToBeFilledByO.E.M.:pvrToBeFilledByO.E.M.:rvnTobefilledbyO.E.M.:rnTobefilledbyO.E.M.:rvrTobefilledbyO.E.M.:cvnToBeFilledByO.E.M.:ct3:cvrToBeFilledByO.E.M.:
dmi.product.name: To Be Filled By O.E.M.
dmi.product.version: To Be Filled By O.E.M.
dmi.sys.vendor: To Be Filled By O.E.M.

Revision history for this message
Oleg K (okobyzev) wrote : AlsaDevices.txt
Revision history for this message
Oleg K (okobyzev) wrote : AplayDevices.txt
Revision history for this message
Oleg K (okobyzev) wrote : BootDmesg.txt
Revision history for this message
Oleg K (okobyzev) wrote : Card0.Amixer.values.txt
Revision history for this message
Oleg K (okobyzev) wrote : Card0.Codecs.codec.0.txt
Revision history for this message
Oleg K (okobyzev) wrote : Card0.Codecs.codec.3.txt
Revision history for this message
Oleg K (okobyzev) wrote : CurrentDmesg.txt
Revision history for this message
Oleg K (okobyzev) wrote : IwConfig.txt
Revision history for this message
Oleg K (okobyzev) wrote : Lspci.txt
Revision history for this message
Oleg K (okobyzev) wrote : Lsusb.txt
Revision history for this message
Oleg K (okobyzev) wrote : PciMultimedia.txt
Revision history for this message
Oleg K (okobyzev) wrote : ProcCpuinfo.txt
Revision history for this message
Oleg K (okobyzev) wrote : ProcInterrupts.txt
Revision history for this message
Oleg K (okobyzev) wrote : ProcModules.txt
Revision history for this message
Oleg K (okobyzev) wrote : UdevDb.txt
Revision history for this message
Oleg K (okobyzev) wrote : UdevLog.txt
Revision history for this message
Oleg K (okobyzev) wrote : WifiSyslog.gz
tags: added: apport-collected
Revision history for this message
Alex Muntada (alex.muntada) wrote :

Today, our IMAP server was struck by this bug.

We're still running hardy, and I'm aware that it has not been supported since May 2013, but it's sad that this problem was not addressed before hardy's EOL. Maybe it wasn't an easy task to backport the Jan 2009 fix mentioned in comment #7 to the hardy kernel; too bad no one tried to.

Data corruption is always bad, very bad business :(
