I think I found another possible fix; it's totally simple, and userspace too. I have a Seagate FreeAgent 500GB and I believe I am having the "drive turns off" problem, not the "can't handle much traffic" problem. By the way, it seems to be common to all of Linux, not just Ubuntu; I have this problem on SUSE 10.2.

I just found this possible solution that uses sdparm to turn off the spindown feature (it's one of the posts in the middle of the page; search for sdparm):

http://www.newegg.com/Product/ProductReview.aspx?Item=N82E16822148235&SortField=0&SummaryType=0&Pagesize=10&SelectedRating=-1&Page=4

The poster's cut-and-paste got munged by the web site (HTML consolidating all whitespace), but this is what he said, unwrapped. Ensure you have the sdparm utility installed, then, as root, do:

  # sdparm -al /dev/sdd
      /dev/sdd: Seagate  FreeAgentDesktop  100D
  Direct access device specific parameters: WP=0  DPOFUA=0
  Power condition [po] mode page:
    IDLE       0  [cha: n, def:   0, sav:   0]  Idle timer active
    STANDBY    1  [cha: y, def:   1, sav:   1]  Standby timer active
    ICT        0  [cha: n, def:   0, sav:   0]  Idle condition timer (100 ms)
    SCT     9000  [cha: y, def:9000, sav:9000]  Standby condition timer (100 ms)

  # sdparm --clear STANDBY -6 /dev/sdd
      /dev/sdd: Seagate  FreeAgentDesktop  100D

  # sdparm -al /dev/sdd
      /dev/sdd: Seagate  FreeAgentDesktop  100D
  Direct access device specific parameters: WP=0  DPOFUA=0
  Power condition [po] mode page:
    IDLE       0  [cha: n, def:   0, sav:   0]  Idle timer active
    STANDBY    0  [cha: n, def:   1, sav

(his paste got cut off there.)

You have to know which device node your drive will get (/dev/sdc, /dev/sdd, etc.), so I don't know how to make it automatic, except if you just never unplug the drive, move it, or do anything else that would cause it to be assigned a different device name. That matters because of something else I found myself: at least in my case, I do not have to reboot to make the drive work again. I do have to disconnect the USB cable, wait a few seconds, and then reconnect. The drive comes back alive, but on a new device node.
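One way around the shifting device name might be to address the drive through its stable /dev/disk/by-id symlink instead of a raw /dev/sdX node. Here is a rough sketch of that idea; the DRIVE_ID value is a made-up placeholder (list /dev/disk/by-id/ to find the real name for your enclosure), and I haven't verified this against a FreeAgent myself:

```shell
#!/bin/sh
# Sketch only: apply the sdparm fix via a stable by-id path, so the script
# still finds the drive after it reappears on a different /dev/sdX node.
# DRIVE_ID below is a hypothetical placeholder -- check /dev/disk/by-id/.
DRIVE_ID="usb-Seagate_FreeAgentDesktop_SERIAL-0:0"
DEV="/dev/disk/by-id/$DRIVE_ID"

if [ -e "$DEV" ]; then
    # Clear the standby timer (current values only; -6 = 6-byte mode commands)
    sdparm --clear STANDBY -6 "$DEV"
else
    echo "drive not present: $DEV" >&2
fi
```

The by-id name is derived from the drive's identity, not from plug order, so in principle this could run from a boot script or cron without caring whether the drive is /dev/sdc or /dev/sdh that day.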
When it's plugged in at boot, it gets /dev/sdc. When I disconnect and reconnect, it gets /dev/sdh. While the drive is locked up, it is not possible to umount the filesystem or do anything else with that device or filesystem. After a disconnect/reconnect it is possible to umount the filesystem and then mount it again on the new device node.

It will take a while before I can say whether this solves the problem for me, but I figured I'd relay it right away, since it's something that hasn't been mentioned in this thread yet, and some people still have the problem after all the proposals made so far. Another poster later in that Newegg thread claims this worked for him too.

Also, a question about the udev rules: am I really reading them right, that they basically say "if you detect a device by vendor foo on SCSI, then do this to it"? Isn't that assuming a lot? In my case the vendor is Seagate, so what happens on one of my servers that has 10 or 12 Seagate SCSI disks? (Yes, I actually have several of those, and yes, I actually was going to use this USB enclosure to move large hunks of data around once in a while, though luckily I never chanced to do that before encountering this problem at home, where it didn't matter. Phew!)

I still need to try one of the fixes proposed here (since I only found this thread today). It seems to me that, if possible, I would rather let this drive spin down like it wants to, and have the kernel know enough to wait for it, than force the drive to run 100% of the time, since this drive is in fact idle 99% of the time. So allow_restart sounds like the better answer, if it works. Testing ensues; report follows. Thanks for all the clues and possible answers people have dropped here for me to find.
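On the vendor-matching worry: a rule could, in principle, be narrowed so it only touches USB-attached disks rather than everything that reports "Seagate" as its SCSI vendor. A hedged sketch of such a udev rule follows; the filename is hypothetical, and 0bc2 is the vendor ID commonly listed for Seagate USB bridges (confirm yours with lsusb before relying on it):

```
# /etc/udev/rules.d/50-usb-allow-restart.rules  (hypothetical filename)
# Sketch: set allow_restart only for SCSI disks sitting behind a USB bridge
# with Seagate's USB vendor ID, so internal Seagate SCSI disks on a server
# are left alone. 0bc2 is an assumption -- verify with lsusb.
ACTION=="add", SUBSYSTEM=="scsi_disk", SUBSYSTEMS=="usb", ATTRS{idVendor}=="0bc2", ATTR{allow_restart}="1"
```

Because allow_restart tells the kernel to issue a start command when a spun-down drive is accessed, this matches the preference above: let the drive spin down, and have the kernel wake it instead of forcing it to run 100% of the time.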