Something else to try: I would suggest using "dstat" with the -D option (i.e., "dstat -D sda,sdb") or "iostat 1" (although I find dstat to be a bit more user-friendly) to measure I/O transfer rates. I would recommend using "dd if=/dev/zero of=/media/disk/test.img bs=32k" to generate the write load, and I would *not* recommend using a flash-based device, because flash devices can show a lot of variance all by themselves. So please, use some kind of HDD.

Speaking personally, I've been doing a rather huge amount of copying of files back and forth between disks, using ext4, in the 30-40 gigabyte range, and I've not noticed anything like this. This could be filesystem related, or related to the page writeback algorithms, so something else to try would be dd'ing to a raw disk, and seeing whether you still see the slowdown.

For me, using 2.6.30-rc5 and the ext4 filesystem, I've done many, *many* copies of data using "rsync -axH" to copy over entire filesystems (for doing things like testing fsck speedups between ext4 and ext3), and I've not noticed a problem. I normally have a window open running "dstat -D sda,sdb", so I can get dynamic output like this once a second:

----total-cpu-usage---- --dsk/sdb-----dsk/sdc-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ: read  writ| recv  send|  in   out | int   csw
 19  26  42  11   0   2|   0    16M:  15M    0 |   0     0 |   0     0 |2081  9176
 20  24  41  13   0   2|   0    16M:  15M    0 |   0     0 |   0     0 |2092  9287
 18  24  42  14   0   1|   0    12M:  14M    0 |   0     0 |   0     0 |2003  8987

Note that filesystem activity can make a huge difference. When copying files around, there are enough seeks going on that in practice I can only read or write 16 megabytes/second.
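As a sketch of the measurement procedure above: run the dd in one terminal while dstat (or iostat) watches in another. TARGET here is a placeholder under /tmp; point it at a file on the HDD you actually want to measure. The conv=fdatasync flag is my addition, not part of the original command; with GNU coreutils dd it forces the data out to the device before the rate is reported, so you aren't just measuring page-cache speed:

```shell
#!/bin/sh
# Sketch: sequential-write measurement, assuming GNU coreutils dd.
# TARGET is a placeholder; use a file on the disk under test, not tmpfs.
TARGET=/tmp/test.img

# Write 64 MiB in 32 KiB blocks; conv=fdatasync flushes to the device
# before dd exits, so the reported rate reflects the disk itself.
dd if=/dev/zero of="$TARGET" bs=32k count=2048 conv=fdatasync

rm -f "$TARGET"
```

For a longer-running test (so dstat has something to watch for more than a second or two), just raise count.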
If I do "dd if=/dev/zero of=/scratch/zero.img bs=32k", I get this:

----total-cpu-usage---- --dsk/sdb-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  3  10  57  28   0   1|   0    57M|   0     0 |   0     0 | 746  1268
  4  27  43  25   0   2|4096B   58M|   0     0 |   0     0 |1065   951
  4  21  44  30   0   1|   0    60M|   0     0 |   0     0 | 701   396
  5  23  50  20   0   2|   0    61M|   0     0 |   0     0 | 718   288

Here's what I get if I do "dd if=/dev/zero of=/dev/closure/scratch bs=32k":

----total-cpu-usage---- --dsk/sdb-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
 11  13  47  27   0   1|   0    61M|   0     0 |   0     0 | 740  1327
  3  15  49  32   0   1|   0    61M|   0     0 |   0     0 | 522   249
  2  10  50  37   0   1|   0    61M|   0     0 |   0     0 | 476   246
  6  14  49  30   0   1|   0    61M|   0     0 |   0     0 | 544   260
  3  12  49  35   0   1|   0    61M|  60B   60B|   0     0 | 558   260

And here's what I get if I format /dev/closure/scratch as ext3 instead of ext4, and then do "dd if=/dev/zero of=/scratch/zero.img bs=32k":

usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  5  19  17  57   0   1|4096B   47M|   0     0 |   0     0 | 637   503
  6  15   0  77   0   1|   0    52M|   0     0 |   0     0 | 608   611
  5   1   0  94   0   1|   0    51M|   0     0 |   0     0 | 385   450

So when people complain about slow disk performance, you really need to control variables. All I can say is that on a Thinkpad X61s, using SATA disks, USB disks, and SATA disks over a SATA/PATA/SATA bridge (a horrific hardware kludge you get when you use disks in the Ultrabay slot --- don't ask), I've not seen the problem people are complaining about. Note that if you are doing copies while there are other programs running in the background, including programs like firefox triggering fsync()'s, this can also affect performance numbers.
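Since a single dd run can be skewed by exactly that kind of background activity, one way to make comparisons like the ext3-vs-ext4 one above more trustworthy is to repeat the identical run a few times and look at the spread. This is only a sketch under a couple of assumptions: /tmp/zero.img stands in for a file on the filesystem under test, and the grep pattern assumes GNU dd's "bytes ... copied" summary line:

```shell
#!/bin/sh
# Sketch: repeat the same write test and print each run's throughput,
# so run-to-run variance is visible. Assumes GNU coreutils dd.
TARGET=/tmp/zero.img   # placeholder: put this on the fs under test

for run in 1 2 3; do
    # conv=fdatasync includes the flush, so the rate isn't cache speed
    rate=$(dd if=/dev/zero of="$TARGET" bs=32k count=2048 \
              conv=fdatasync 2>&1 | grep 'copied')
    echo "run $run: $rate"
    rm -f "$TARGET"
done
```

If the three numbers differ wildly on an otherwise idle machine, something (writeback behavior, background fsync()'s, flash wear leveling) is interfering, and comparing single runs across filesystems tells you very little.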
So what's really needed are people who are willing to spend a lot of time drilling down into why the performance is dropping, while controlling variables. I recommend using a scratch filesystem and scratch disks that you don't mind reformatting, so you can do multiple repeatable experiments, and doing it on a system that isn't running anything else in the background.
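A single repeatable trial along those lines might look like the sketch below. The SCRATCH path is a placeholder for a file on your scratch filesystem; the drop_caches write is my addition (it needs root, and the script skips it quietly otherwise) so that one run's page cache doesn't bleed into the next:

```shell
#!/bin/sh
# Sketch of one controlled trial on an otherwise idle system.
# SCRATCH is a placeholder path on a scratch fs you can reformat.
SCRATCH=/tmp/scratch-test.img

# Flush dirty pages so earlier activity doesn't leak into this trial.
sync

# Dropping the page cache needs root; skip quietly when we can't.
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# The write test itself; conv=fdatasync includes the flush in the timing.
dd if=/dev/zero of="$SCRATCH" bs=32k count=2048 conv=fdatasync
rm -f "$SCRATCH"
```

Run it several times per kernel/filesystem combination, reformat the scratch filesystem between configurations, and only then compare numbers.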