Just wanted to add my two cents, since I've been experiencing this problem for a very long time now on various machines. Up to now I simply adapted by not doing anything else on the OS while large file copies were running, but I may have stumbled upon a solution, or at least a useful data point.

I first hit these problems (the one you are talking about in this bug, plus some others) after I started using MD RAID. At first I thought it was the I/O scheduler, so I tried every scheduler there is: noop, CFQ, deadline, anticipatory... Some helped a little bit, some didn't. Then I thought it was the FS and tried ext2, ext3, XFS and now ext4. The same problem remained: whenever I started copying large files the OS "hiccuped". Everything that had to do any disk work stopped. Music and OpenGL kept working normally, but the responsiveness of the system was gone for 1 or 2 seconds: no browsing, no switching terminal windows. For a while I thought it had something to do with swap, too.

A few days ago I got myself a new machine (i7 950, 2 x SATA3 WD HDs, 12GB of RAM) and installed a fresh OS, pure 64-bit, kernel 2.6.36. I had to copy my old data to the new disks and reuse the old disks, and the way I did it turned out to be important. I took a 1TB WD SATA3 HD, made some partitions (6, to be exact) and compiled a new OS. Then I copied the old data over from the old RAID (4 partitions on each disk, with MD RAID 1 on two partitions each). While copying that data I had these hiccups on the new system as well.

Then I had an idea: since MD can now create partitionable arrays and take whole disks as members, I could make a RAID 10 out of these four disks, 2 new ones and 2 old ones. So it was like "mdadm --create /dev/md0 ... --raid-devices=4 /dev/sda /dev/sdb..." (a rough reconstruction follows below). Worked like a charm. Then I partitioned the array with "fdisk /dev/md0". No problem there. Then I copied the old stuff from the single drive with its 6 partitions onto the new array.

Now here is the interesting bit: no hiccups! Write throughput was around 120MB/s and the OS stayed smooth as a baby's bottom. And it was the same OS, with no changes at all to the kernel build or anything else. Read throughput was 270MB/s (dd test).

But since a rootfs won't work on a partitioned MD array (some kernel race problem, but that's another story) I had to change the setup on the new HDs. So again I created 4 normal partitions on each disk: one partition from every disk for a /boot RAID 1, another 4 for swap, another 4 for the rootfs (also RAID 1), and the last 4 for a RAID 10 which I split into two separate partitions (srv and home). And the hiccups came back.

So this isn't hardware-related; I can reproduce the problem on plenty of hardware (the list follows the command sketch below). It's not the filesystem, because I've used them all. It's not swap, because this new machine didn't start swapping while I was copying. But the problem always comes up when I build MD RAID out of several ordinary partitions.
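Since I quoted that mdadm command from memory, here is roughly what the whole-disk sequence looked like (device names as in the working setup listed below; chunk size and layout were left at the mdadm defaults, and the dd line is the kind of test I run, not the exact invocation):

    # create a partitionable RAID 10 from the four whole disks
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # partition the array itself, same as with a plain disk
    fdisk /dev/md0

    # the kind of dd read test that gave the ~270MB/s figure
    dd if=/dev/md0 of=/dev/null bs=1M count=4096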
The list of hardware:

Quad-Core Q6600, ICH7 chipset I think, 8GB RAM, 2 x WD10EARS; kernel 2.6.20-something, 32-bit system, LinuxFromScratch 6.1 or 6.2, can't remember. That system has worked for three years now. The disks were partitioned as sda1-sda4 and sdb1-sdb4, and the arrays were md0 -> (sda1,sdb1), ..., md3 -> (sda4,sdb4):
md0 -> /boot
md1 -> swap
md2 -> /
md3 -> /srv

2 x Fujitsu Siemens RX100 S6, one with a XEON 3220 (quad) and one with a XEON E3110 (dual), both with 4GB RAM (can't remember the chipsets); kernel 2.6.32.10, pure 64-bit system, LFS 6.5.

And now: i7 950, 12GB RAM, ICH10 chipset, 2 x WD10EARS, 2 x WD1002FAEX (+1 temporary); kernel 2.6.36, pure 64-bit, LFS 6.7.

The setup that worked:

sda1-sda6 (the temporary disk); sdb, sdc, sdd, sde as whole-disk array members.
md0 -> (sdb,sdc,sdd,sde) RAID 10
md0p1 -> /boot (tried it, but GRUB couldn't handle it)
md0p2 -> swap (no problem there)
md0p3 -> / (tried it after a workaround to let GRUB boot from RAID 10, but the kernel didn't want to play along)
md0p4 -> extended partition
md0p5 -> /home (no problem there)
md0p6 -> /srv (no problem there)
sda1 -> /boot
sda2 -> swap
sda3 -> /
sda5 -> /home
sda6 -> /srv

Unfortunately I had to dump this setup because of a race condition where the kernel can't assemble a partitioned MD array before the rootfs is mounted at boot. :-(

Now the setup that doesn't work (the one with the hiccups):

sda1-sda4, sdb1-sdb4, sdc1-sdc4, sdd1-sdd4
md0 -> (sda1,sdb1,sdc1,sdd1) RAID 1 -> /boot
swap -> (sda2,sdb2,sdc2,sdd2); didn't know what else to do with the free space
md1 (which somehow changed to md126 automagically after the third boot) -> (sda3,sdb3,sdc3,sdd3) RAID 1 -> /
md2 (which somehow changed to md127 automagically after the third boot) -> (sda4,sdb4,sdc4,sdd4) RAID 10
md2p1 (changed to md127p1) -> /home
md2p2 (changed to md127p2) -> /srv
plus the temporary disk, which sat in the sda slot until I had copied everything over to the new setup.

Just to mention that throughput is still OK, around 80MB/s write (haven't tried read yet); it's only the hiccups.

So, what else do you need from me so that we can kill this pesky bug? I can do anything that won't kill my system, since I'm using it for everyday work; everything else (torture tests and so on) is fine after working hours. Oh, and yes, I tried the /sys/block/sdX/device/queue_depth thing: it worked for 5 minutes and then the hiccuping was back. dd is around 120MB/s...
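To be precise about the queue_depth thing, what I tried was along these lines (disk names from my current setup; 1 effectively disables command queueing, and it needs root):

    for d in sda sdb sdc sdd; do
        echo 1 > /sys/block/$d/device/queue_depth
    done

And regarding the md126/md127 renaming: I haven't tried this yet, but my understanding is that pinning the arrays in /etc/mdadm.conf keeps the names stable; "mdadm --detail --scan" prints the real ARRAY lines, something like (the UUIDs here are placeholders):

    ARRAY /dev/md0 UUID=<uuid-of-md0>
    ARRAY /dev/md1 UUID=<uuid-of-md1>
    ARRAY /dev/md2 UUID=<uuid-of-md2>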