Settings in /etc/hdparm.conf are not applied during boot

Bug #595138 reported by Maxim Tikhonov on 2010-06-16
This bug affects 41 people
Affects: hdparm (Ubuntu)
Importance: Undecided
Assigned to: Unassigned

Bug Description

Binary package hint: hdparm

After upgrading to Lucid the hard drives never spin down. The hdparm spindown settings (and most likely others) in /etc/hdparm.conf are not applied during boot.

This seems to have happened after removing the init.d script.

lsb_release -rd
Description: Ubuntu 10.04 LTS
Release: 10.04

apt-cache policy hdparm
hdparm:
  Installed: 9.15-1ubuntu9
  Candidate: 9.15-1ubuntu9
  Version table:
 *** 9.15-1ubuntu9 0
        500 http://archive.ubuntu.com/ubuntu/ lucid/main Packages
        100 /var/lib/dpkg/status

cat /etc/hdparm.conf
/dev/sda {
    spindown_time = 240
}
/dev/sdb {
    spindown_time = 240
}
/dev/sdc {
    spindown_time = 240
}
/dev/sdd {
    spindown_time = 240
}
/dev/sde {
    spindown_time = 240
}
/dev/sdf {
    spindown_time = 240
}
/dev/sdg {
    spindown_time = 240
}

Maxim Tikhonov (tikhonov) wrote :

Some of my comments regarding the issue from #568120.

It looks like the functionality from "/etc/init.d/hdparm" was moved into "/lib/udev/hdparm" and "/lib/hdparm/hdparm-functions", which is fine, as /etc/hdparm.conf should still work.

The udev rule in "/lib/udev/rules.d/85-hdparm.rules" calls "/lib/udev/hdparm" when udev "add" event is triggered for new drives, which in turn should set spindown intervals or any other settings you specified in "/etc/hdparm.conf" for your drives.

If I run "sudo udevadm trigger", which triggers udev "add" events for all devices, the settings do get applied just fine.

However if I reboot, then the settings do not get applied during boot.

So it seems the functionality in the new scripts works as it should; it's just that they are not triggered during boot.

I added some "echo" lines to "/lib/udev/hdparm" for debugging; when I manually trigger "add" events, my debug logs are created. After a reboot, no debug logs are created.

So I am guessing that either the rule does not get triggered during boot, or maybe it is triggered too early, which causes it to fail (e.g. the file system is not mounted yet).

Maxim Tikhonov (tikhonov) wrote :

As a temporary workaround for disk spindown I added the following to "/etc/rc.local".

# workaround for hdparm disk spindown in 10.04
(printf '\n%s\n' "$(date)" && for devname in $(grep -o "/dev/sd[a-z]" /etc/hdparm.conf); do DEVNAME="$devname"; export DEVNAME; /lib/udev/hdparm || true; done) >> /var/log/hdparm_fix.log

This reads "/etc/hdparm.conf", extracts all "sd*" device names, and then runs the "/lib/udev/hdparm" script the same way the "/lib/udev/rules.d/85-hdparm.rules" rule does. It applies the settings from "/etc/hdparm.conf" during boot. For new devices plugged in after boot, the standard rule should work just fine. If anybody uses this, check "/var/log/hdparm_fix.log" after boot to make sure it applied the settings in your config.
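The device-name extraction at the heart of that workaround can be sketched in isolation (a minimal sketch against a throwaway file; the real workaround reads /etc/hdparm.conf itself):

```shell
# Build a throwaway hdparm.conf-style file and extract the /dev/sdX
# device names from it, the same way the rc.local workaround does.
conf=$(mktemp)
cat > "$conf" <<'EOF'
/dev/sda {
    spindown_time = 240
}
/dev/sdb {
    spindown_time = 240
}
EOF
devs=$(grep -o '/dev/sd[a-z]' "$conf" | sort -u)
printf '%s\n' "$devs"    # prints /dev/sda and /dev/sdb
rm -f "$conf"
```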

description: updated

Adding "udevadm trigger" to /etc/rc.local spins down my hard disks. Thanks for pointing that out.
It does make me wonder why you chose a more complex workaround in #2.

Maxim Tikhonov (tikhonov) wrote :

I think I used that longer workaround because "udevadm trigger" causes add events to be fired for all devices, whereas my workaround only applies settings to the hard drives.

I didn't want to call "udevadm trigger" because I didn't want to spend time investigating if it would affect anything else in the system (this is kind of a production box).
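A middle ground between the two workarounds (an untested rc.local fragment; the match options are documented in udevadm(8)) would be to re-trigger "add" events only for block devices, so only disk-related rules such as 85-hdparm.rules fire:

```shell
# rc.local fragment: re-run "add" events for block devices only,
# rather than for every device in the system.
udevadm trigger --subsystem-match=block --action=add
```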

Any chance someone can look into this issue? It's been going on for months now, and I'd really like my RAID array disks to spin down, please. It's kind of basic, fundamental functionality... even Windows can do it correctly. It's these kinds of problems that really let Linux down, and it's a real shame.

Maxim Tikhonov (tikhonov) wrote :

+1 to the post above. I can't understand why this isn't being addressed, or why so few users notice this fault.

I could attempt a fix if somebody pointed me in the right direction (I currently don't have time to learn the inner workings of the whole hdparm+udev universe), but for the time being I am OK with the workaround.

I can only agree with the above posters. My workaround was to revert to 9.10. Still, I agree it would be nice if this got sorted properly. It does not help users to post bugs if it seems like nothing gets done about them.

Steffen Barszus (steffenbpunkt) wrote :

Yes, definitely a bug; I just discovered it by accident...

The idea of using udev for this is quite nice, but it has a design flaw; by definition this seems doomed to fail, IMHO. At the time of the "add" event for sd?, it executes something that needs filesystem access, possibly on the same drive (right before mount). I'm not sure that could ever work. (Can someone prove me wrong? At least for /etc/hdparm.conf you might get problems.)

An Upstart job like this could help (not sure about the suspend-to-RAM case):

start on filesystem

script
    for DISK in /dev/[hs]d?; do
        DEVNAME=$DISK
        export DEVNAME
        /lib/udev/hdparm
    done
end script

Besides that, I think that even when it is set successfully the spindown still does not happen; I'm still trying to find out why. Can somebody confirm this, or is it just me? (I suppose I'll have to open another bug, but not before I know what's happening.)

Please let me know ...

Maxim Tikhonov (tikhonov) wrote :

I just installed Ubuntu Server 10.10 on a new machine last week. I changed /etc/hdparm.conf and it seems to take effect (the drives did spin down).

So did someone fix this?

I haven't tried my other installs of Kubuntu, Mythbuntu yet, so I can't comment.

Maxim Tikhonov (tikhonov) wrote :

By the way, I meant a clean install of 10.10.

davet218 (davet218) wrote :

Why has this issue been resolved only in 10.10 and not in 10.04? I have programs that only run on 10.04.

Bob Harvey (bobharvey) wrote :

I'm running 10.04 LTS server edition, and this is affecting me. I can spin down by hand with
hdparm -y /dev/sd{a,b,c,d}
but the contents of /etc/hdparm.conf appear to be ignored at boot time.

Maxim Tikhonov (tikhonov) wrote :

The problem is not in your configuration file.

Use a workaround posted above.

I think this is fixed in later versions, 10.10 and above.

Bob Harvey (bobharvey) wrote :

I've noticed that the timeouts set with hdparm -S from the command line are not respected (still on 10.04 LTS), but hdparm -Y works just fine.

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in hdparm (Ubuntu):
status: New → Confirmed
Graeme Hewson (ghewson) wrote :

Problem is still present in 12.04 Beta 2.

tags: added: precise
Graeme Hewson (ghewson) wrote :

Apologies, the problem is not present in Precise, and hdparm is working fine. (smartd was spinning up my disk.)

I suggest this bug be closed.

This isn't exactly the right place to say it, but hdparm should be extended to provide more ways of specifying devices in hdparm.conf than the /dev path, so that disk label, disk serial number, physical path, etc. could be used. Obviously, putting the hdparm script in /lib/udev is a step towards this.

For instance, I have two internal disks and a hot-swap caddy. Normally the internal disks are /dev/sda and /dev/sdb, but if there's a disk in the caddy when I boot, this becomes /dev/sdb and one of the internal disks becomes /dev/sdc.

b3nmore (b3nmore) wrote :

"This isn't exactly the right place to say it, but hdparm should be extended to provide more ways of specifying devices in hdparm.conf other than the /dev path, so disk label, disk serial number, physical path, etc., could be used."

Try the patch from https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/799312 .

Bob Harvey (bobharvey) wrote :

Hdparm.conf is not being respected in my new install of 12.04 server.

bug 875719 seems to be a duplicate, with 3 sufferers including me.

Thanks, Bob, for pointing me to this bug! On 2011-10-16 I filed a duplicate of this bug: https://bugs.launchpad.net/bugs/875719. Apologies for that. Here's a summary of my observations:

hdparm -S worked with 10.10. But with 11.04, my external hard disk (Seagate), connected via an eSATA ExpressCard on a ThinkPad X200s, doesn't spin down anymore with the inactivity timeout set to 12 (1 minute). I'm still able to spin down the disk with the -Y option, but a few seconds later it spins up again. hdparm with -S set to 1 works too, but the disk spins up again after 15 seconds. I guess some process accessing the disk prevents it from spinning down, which only shows with -S set high enough. How can I detect which process prevents the disk from spinning down? This hard disk is my backup disk and is only in use every 4 hours by a cron job.

Are you guys sure that it is a problem with hdparm and not with the APM configuration?

Have you tried to enable it in hdparm.conf?

apm = 255

On 29-6-2012 10:57 AM, Antonino Catinello wrote:
> Are you guys sure that it is a problem with hdparm and not with the apm
> configuration?
>
> Have you tried to enable it in hdparm.conf?
>
> apm = 255
>
Also look at the -B parameter of the disks with hdparm.

manpage:
  -B  Get/set Advanced Power Management feature, if the drive supports it. A low value means aggressive power management and a high value means better performance. Possible settings range from values 1 through 127 (which permit spin-down), and values 128 through 254 (which do not permit spin-down). The highest degree of power management is attained with a setting of 1, and the highest I/O performance with a setting of 254. A value of 255 tells hdparm to disable Advanced Power Management altogether on the drive (not all drives support disabling it, but most do).

Due to this bug I have moved my server to Debian 6, and all of a sudden I found that the -B value needed to be set to 128 to get spin-down to work correctly, although the manpage says values above 127 do not allow spindown. If set to 127 my disks spin down but spin up instantly, and with -B set to 128 everything works fine in Debian, so it might help in Ubuntu too?
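Oscar's observation would translate into a stanza like this in /etc/hdparm.conf (a sketch only; whether apm = 128 is the right value depends on the drive, as he notes, and it contradicts what the manpage documents):

```
/dev/sda {
    apm = 128
    spindown_time = 240
}
```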

Bob Harvey (bobharvey) wrote :

Thanks Oscar.
I just tried apm = 255, 254, 128, 127, and 1, and it made no difference.
At boot time the disks are not spun down as defined in hdparm.conf, but I can do so from the command line.

I also found this bug which says the man page is wrong:
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/172287
which Ubuntu seems to have closed without doing any investigation at all.

Liam Tuvey (0c-liam-wz) wrote :

I can confirm the bug still exists in Ubuntu 12.04.1 LTS; spindown_time in hdparm.conf is not respected.
I can spin down the drives from the command line using hdparm.

_dan_ (dan-void) wrote :

still exists in 12.10

jdblair (jdb) wrote :

I can confirm that the contents of /etc/hdparm.conf are not affecting the drive settings in 12.04.1 LTS.

For performance reasons I need to turn off the write cache for my SSD. I set the correct setting in /etc/hdparm.conf, but the write cache is still on when I boot. Further, when I manually turn the write cache off (hdparm -W0 /dev/sda), it is turned back on when the system wakes from sleep.

Eric_DL (edelare) wrote :

Problem detected on Ubuntu Studio x64 12.10 and reproduced under Studio x64 10.10 (both OSes running on the same machine, on two different hard drives).

Much the same situation as jdblair above (I need to disable the write cache on a disk at boot/resume time), except for two differences:
    - it's a classic hard drive, not an SSD
    - that drive is the boot drive

I added that new drive as the target boot/root disk for a fresh 12.10 install. It turns out the kernel is unable to flush that disk's write cache when shutting down, rebooting or suspending (the machine refuses to suspend and hangs on reboot/shutdown).
Bottom line: without disabling write caching on the boot disk at boot time, Ubuntu 12.10 is unusable.

Since hardware enumeration during boot is now completely dynamic, the different hard drives in the machine don't get the same /dev node each time, hence the UUID and by-id symlinks. So I added the following directive at the end of /etc/hdparm.conf:

    /dev/disk/by-id/ata-VB0250EAVER_9VMTTK1M {
        write_cache = off
    }

But it never seems to be executed at boot time, as you can see below :

    root@bigboy:/etc# hdparm -W /dev/disk/by-id/ata-VB0250EAVER_9VMTTK1M

    /dev/disk/by-id/ata-VB0250EAVER_9VMTTK1M:
    write-caching = 1 (on)

I also commented out the "quiet" directive at the beginning of /etc/hdparm.conf to get some log of what's going on, but to no avail.

Am I missing something ?

Gerry (gsker) wrote :

It appears to be fixed in 13.04. Apply the patch above.

Aside: you don't actually need to "patch". You can just edit the /lib/hdparm/hdparm-functions file; it's a 7-line change. Find the line that says DISC=$KEY and wrap it in an "if".

https://launchpadlibrarian.net/73763550/hdparm-functions.patch
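As I understand it, the gist of that patch is to resolve a symlinked device path back to its real /dev node (with readlink -f) before comparing it against udev's DEVNAME. This is a sketch of the technique, not the patch itself, demonstrated with a throwaway symlink:

```shell
# Create a stand-in device node and a by-id style symlink to it,
# then resolve the symlink back to the real path, as
# hdparm-functions needs to do for /dev/disk/by-id/... entries.
tmpdir=$(readlink -f "$(mktemp -d)")
touch "$tmpdir/sda"                          # stand-in for /dev/sda
ln -s "$tmpdir/sda" "$tmpdir/ata-EXAMPLE"    # stand-in for a by-id link
resolved=$(readlink -f "$tmpdir/ata-EXAMPLE")
printf '%s\n' "$resolved"
rm -rf "$tmpdir"
```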

Eric_DL (edelare) wrote :

Reading the first posts of this thread again, it seems I made a mistake posting my comment here.

Maxim Tikhonov could get udev to trigger /lib/udev/hdparm manually, but not during the boot sequence.

My problem was actually that of bug #222458: a bug in hdparm-functions that prevented it from recognizing hard drives described by a symlink (e.g. /dev/disk/by-id/...) instead of their device file (/dev/sd*) in /etc/hdparm.conf.

Thanks Gerry for pointing me to the solution.
This bug has indeed been fixed in 13.04 (hdparm 9.43), but not back-ported to 12.10, so I modified hdparm-functions manually and it works :-)

BTW, I've just realized that:
    - bug #222458 dates back to April 2008
    - the patch you linked to dates back to June 2011
    - it took 2 more years for the fix to be released with 13.04

That makes a total of 5 years for this bug to be fixed!! :-/

I can confirm that in 13.04 settings from /etc/hdparm.conf are indeed applied on boot from S5 state.

However settings are still not applied when booting from S3 or S4 state. I am wondering if I need to create a new bug report for this behaviour...

I did some research and it turned out my issue is a device-specific bug affecting certain Western Digital drives. I have posted a bug report and a workaround under pm-utils: https://bugs.launchpad.net/ubuntu/+source/pm-utils/+bug/1225169

Bob Harvey (bobharvey) wrote :

I have just updated to server edition 12.04 (because I imagine 14.04 will be along soon) and noticed that the problem of settings not being read from /etc/hdparm.conf is still there. I guess whatever was done in 13.04 was not backported to the LTS version.

Shame; that's what I thought LTS was for. Looks like us server users will have to wait for the next LTS, which means for us comment 29 should read "6 years"!

Flittermice (flittermice) wrote :

Sorry, but I found this thread while searching for the reason my HDs are not spinning down in Xubuntu 13.10.

João Assad (jfassad) wrote :

HDs not spinning down in 14.04.4

João Assad (jfassad) wrote :

In 14.04.4, HDs spin down after resume but not after boot.

Ken Sharp (kennybobs) wrote :

hdparm.conf is still ignored in Xenial, as hard as that is to believe!

tags: added: trusty xenial
Bartek Krol (ihaz) wrote :

I fixed this by using this patch (https://launchpadlibrarian.net/73763550/hdparm-functions.patch), which resolves (readlinks) an HDD path symlink inside /etc/hdparm.conf to /dev/sdX.

But I also had to patch /lib/udev/hdparm, because I had to:
1) convert (readlink) the DEVNAME input parameter to the hdparm_options function in '/lib/hdparm/hdparm-functions'; this fixes the case where someone passes a symlink as input to /lib/udev/hdparm
2) trim the trailing digits from DEVNAME, because the udev rule passes partitions as well as the disk device as input to /lib/udev/hdparm

Example scenario I had:
1) the udev rule runs /lib/udev/hdparm and passes /dev/sda1 as DEVNAME => /dev/sda1 is not in /etc/hdparm.conf, so the DEFAULT settings are applied
2) the udev rule runs /lib/udev/hdparm and passes /dev/sda as DEVNAME => /dev/sda is in /etc/hdparm.conf, so the CORRECT settings are applied
3) the udev rule runs /lib/udev/hdparm and passes /dev/sda2 as DEVNAME => /dev/sda2 is not in /etc/hdparm.conf, so the DEFAULT settings are applied

All in all, for me, the order was random between reboots. Sometimes the disk came last and the correct settings were applied, and sometimes a partition came into play afterwards and overwrote the /etc/hdparm.conf settings with the DEFAULTS.
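The partition-suffix trimming Bartek describes can be sketched with POSIX parameter expansion (an illustration of the idea, not his actual patch; it assumes /dev/sdXN-style names):

```shell
# Strip a trailing partition number so /dev/sda1 and /dev/sda2
# both resolve to the whole-disk name /dev/sda, which is what
# /etc/hdparm.conf actually lists.
trim_partition() {
    printf '%s\n' "${1%%[0-9]*}"
}
trim_partition /dev/sda1    # prints /dev/sda
trim_partition /dev/sda     # prints /dev/sda (unchanged)
```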

The attachment "hdparm.patch" seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch