[SRU] [xenial] [bionic] autofs needs to be restarted to pick up some shares

Bug #40189 reported by Tony Middleton on 2006-04-19
This bug affects 43 people
Affects Status Importance Assigned to Milestone
autofs (Ubuntu)
Declined for Karmic by Mathias Gug
Declined for Maverick by David Britton
autofs5 (Ubuntu)
Declined for Karmic by Mathias Gug
Declined for Maverick by David Britton
upstart (Ubuntu)
Declined for Karmic by Mathias Gug
Declined for Maverick by David Britton

Bug Description

I am using autofs to access shares on a Windows XP machine from a Kubuntu AMD64 machine. The problem occurs in both Breezy and Dapper.

EDIT: confirmed with similar configuration on Intrepid with a NetApp filer hosting NFS. Server OS removed from summary.

When I first try to access the mount point via cd or in Konqueror it does not exist. However, if I then restart autofs (/etc/init.d/autofs restart) everything works OK. My config files are:


# $Id: auto.master,v 1.3 2003/09/29 08:22:35 raven Exp $
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#/misc /etc/auto.misc --timeout=60
#/misc /etc/auto.misc
#/net /etc/auto.net

/petunia /etc/petunia.misc --timeout=60


# $Id: auto.misc,v 1.2 2003/09/29 08:22:35 raven Exp $
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage

cd -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom

tony -fstype=smbfs,defaults,password=xxx,fmask=777,dmask=777 ://
chris -fstype=smbfs,defaults,password=xxx,fmask=777,dmask=777 ://
shared -fstype=smbfs,defaults,password=xxx,fmask=777,dmask=777 ://
linuxbackups -fstype=smbfs,defaults,password=xxx,fmask=777,dmask=777 ://

Tony Middleton (ximera) wrote :

This is still a problem in feisty

Tony Middleton (ximera) wrote :

This may be the same as Debian bug #332717

Erik Lander (erik-lander) wrote :

I had a similar problem: I'm using autofs to mount an NFS share as my home directory at boot, and I also had to restart autofs manually every boot. I disabled NetworkManager like this 'chmod -x /usr/sbin/NetworkManager*' and now it works without having to restart. Though my problem seemed to be that my automount map is stored in an LDAP database. =P

Can confirm this too. I'm also storing my automount map in LDAP. After a fresh boot autofs loads the map from LDAP but does not initialize any mount points:

Configured Mount Points:
/usr/sbin/automount --timeout=60 --ghost /data/nfs/home ldap ou=auto.home,ou=automount,dc=exampe,dc=com
/usr/sbin/automount --timeout=60 --ghost /data/nfs/shares ldap ou=auto.misc,ou=automount,dc=example,dc=com

Active Mount Points:

In the meantime I've got a workaround to share. It looks like we need to reload autofs when NetworkManager refreshes the network. Maybe these files can be added to the package as a bugfix; that is not for me to decide:

/etc/network/if-down.d/autofs and /etc/network/if-up.d/autofs:

#!/bin/sh
[ "$IFACE" != "lo" ] || exit 0
/etc/init.d/autofs reload
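The guard line is the key part of that hook: `[ "$IFACE" != "lo" ] || exit 0` means "bail out silently for the loopback interface, fall through and reload for real interfaces". A quick way to sanity-check that idiom (the `should_reload` helper is ours, for illustration only, not part of the hook):

```shell
#!/bin/sh
# Demonstrate the loopback guard used by the if-up.d/if-down.d hook.

should_reload() {
    # Same test as the hook's first line: true for any interface except lo.
    [ "$1" != "lo" ]
}

should_reload eth0 && echo "eth0: autofs would be reloaded"
should_reload lo   || echo "lo: hook exits without reloading"
```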

Works for me

Michael Rickmann (mrickma) wrote :

Similar here with Gutsy (Edubuntu). I am trying to set up autofs with automount maps in LDAP and NetworkManager uninstalled. My guess is that autofs gets started before slapd, cannot find the maps, and decides there is nothing to do. A simple
update-rc.d -f autofs remove
update-rc.d autofs defaults 20
works for me.
The changelog, however, states that the autofs start priority was lowered to 19 instead of 20 (sometime in 2005) because other daemons rely on it.

Helge Stenström (h-stenstrom) wrote :

I have a similar problem. I mount /home and a few other directories from NFS, with mount definitions taken from NIS. I have to run /etc/init.d/autofs restart manually after each boot.
This is with Ubuntu 8.04, and I think the problem started with 7.10, perhaps with 7.04.

'chmod -x /usr/sbin/NetworkManager*' didn't seem to help.

Helge Stenström (h-stenstrom) wrote :

The solution by Sebastiaan Veldhuisen above works for me.

Tony Middleton (ximera) wrote :

This seems to be fixed in hardy - at least as far as my original problem with shares on a Windows XP machine.

Daniel T Chen (crimsun) wrote :

Per submitter feedback. Please reopen if reproducible in 8.10 alpha.

Changed in autofs:
status: New → Fix Released
Helge Stenström (h-stenstrom) wrote :

Not fixed for me in 8.04.1, where NFS files are mounted with help from NIS. I will not reopen now, but will hopefully remember to test with 8.10 next month.

I reproduced this problem on a release installation of Ibex (8.10)
It's times like these that I'm glad I run FreeBSD on my servers. Ubuntu, a great desktop, lousy server.

Adam Katz (khopesh) wrote :

Reproduced on Ibex (8.10), none of the above proposed workarounds solved the issue.

The reason the above workarounds did not solve my issue was that my home directory was mounted via autofs, so something like a script run by NetworkManager, which requires a login, will not do the trick. I'm not sure why the /etc/network/*.d/ scripts didn't do it. I had to create an init script (attached).

Since the autofs-bump init script is a hack, I didn't know how to enter it with update-rc.d ... here's how to do it manually:

sudo -s
cp etc_init.d_autofs-bump /etc/init.d/autofs-bump
chmod 755 /etc/init.d/autofs-bump
cd /etc
for L in 2 3 4 5; do
  cd rc$L.d
  ln -s ../init.d/autofs-bump S99autofs-bump
  cd -
done

Adam Katz (khopesh) on 2009-02-12
Changed in autofs:
status: Fix Released → Incomplete
Adam Katz (khopesh) on 2009-02-12
description: updated

I also saw this problem while using Ubuntu 8.10 x86_64. I think I may have found the source of the problem.

I'm using this computer in a corporate environment, where each users home directory is mounted using autofs, and authentication is done using NIS.

The ssh daemon and automounting of the user's home directory failed to work after rebooting, but I found a workaround and a possible cause. This computer is using NetworkManager, which is loaded with /etc/rc2.d/S28NetworkManager. Autofs is loaded as /etc/rc2.d/S19autofs, and ssh as /etc/rc2.d/S19ssh, which are both loaded prior to NetworkManager.

The first thing I tried was to move autofs to S29, and ssh to S30 (both above NetworkManager). This should ensure that a network connection exists before starting the ssh and autofs daemons. However this did not work, for the following reason. If one runs '/etc/init.d/NetworkManager restart', this command will return several seconds before a network connection is established. The exit code it returns is 0, so there is no indication of an error.

As a test, I added the line 'sleep 15' near the end of /etc/init.d/NetworkManager (so this script would wait for 15 seconds before exiting), and rebooted the computer. Both the autofs and ssh daemons worked correctly after this.

I'm not proposing that the NetworkManager script wait for 15 seconds; instead it should wait until a network connection is either established or has failed, and return the appropriate exit code. Also, the NetworkManager init script should probably be placed before any other scripts that require a working network connection.
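For what it's worth, later NetworkManager releases ship an `nm-online` utility that blocks until a connection is active, which would replace the fixed sleep. A hedged sketch (assumes `nm-online` is installed; the restart target is the sysvinit autofs script of this era):

```shell
#!/bin/sh
# Sketch: wait for NetworkManager to report an active connection before
# kicking autofs, instead of a hard-coded 'sleep 15'.

wait_for_network() {
    # nm-online exits 0 once a connection is active, non-zero on timeout.
    nm-online -q --timeout "${1:-30}"
}

# Only attempt this where nm-online actually exists.
if command -v nm-online >/dev/null 2>&1; then
    if wait_for_network 30; then
        /etc/init.d/autofs restart
    else
        echo "network did not come up in time; not restarting autofs" >&2
    fi
fi
```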

Gustavo Spadari (gspadari) wrote :

The solution by Sebastiaan Veldhuisen above works for me on one machine, but not on the other. Both machines have 9.04. I don't know what the difference could be.

My autofs master map is in LDAP, thus I experience this issue with Jaunty 9.04 server - this is slightly different, in that NetworkManager isn't involved here.

autofs needs restarting before network mounts are available, due to incorrect start ordering: S20autofs runs earlier than S35networking. Thus the canonical workaround would be:

update-rc.d -f autofs remove
update-rc.d autofs defaults 36

however this doesn't resolve the issue, so the 'get out of jail' move is:

echo -e '#!/bin/bash\n/etc/init.d/autofs restart\n' >/etc/dhcp3/dhclient-exit-hooks.d/autofs
chmod 755 /etc/dhcp3/dhclient-exit-hooks.d/autofs

This is quite a troublesome bug - this just works on Red Hat 5, which gives users a bad experience with Ubuntu 9.04 server.

Changed in autofs (Ubuntu):
status: Incomplete → Confirmed

This bug is still at large in Ubuntu 9.10, as observed on the desktop x86-64 variant.

This may not be reproducible with 'static' configurations where the automount tables are configured in files, but when they are specified in nsswitch.conf as 'automount: ldap', this fails to initialise - restarting the autofs service is needed.

If needed, let me know what area of detail is required to reproduce this.

Ken Arnold (kenneth-arnold) wrote :

Our site configuration stores auto.master in NFS for ease of updating. (Yes it should perhaps move to LDAP, but that is beyond my control.) Karmic autofs starts correctly about half of the time, depending on whether the NFS mount finishes before autofs gets around to reading its config. (I'm not sure if this is the exact same bug.) We clearly want autofs to start when an interface comes up. (In fact, our particular configuration requires that a _particular_ interface be up, but that may be out of scope of this bug.)

Jay (jay-wharfs) wrote :

I'm still seeing this with
automount: files ldap

on 9.10 server x86_64.

seems to work okay once the machine is booted and /etc/init.d/autofs restart is run.

Heiko Harders (heiko-harders) wrote :

I have this problem as well: Ubuntu 9.10, reading mount points from an LDAP server. The LDAP server runs in a guest virtual machine, while the host itself needs autofs. I tried moving autofs from S19 to S30 in the init scripts (after libnss-ldap and after qemu-kvm and libvirt-bin), but that didn't help. I also tried Sebastiaan Veldhuisen's suggestion, but that didn't work either.

fejes (anthony-fejes) wrote :

I am experiencing what I think is the same problem in 10.04. It wasn't a problem in any of the prior versions on this machine, but a recent upgrade caused this to affect ~50% of cold boots.

It's not difficult to solve - once the failed login happens (home is not mapped), I go to one of the terminals (Ctrl-Alt-F1) and issue:

sudo /etc/init.d/autofs reload

After this, I can switch back to the KDE/GNOME prompt and log in through the graphical prompt, which is now Ctrl-Alt-F8.

Tails (tails-tailszefox) wrote :

Problem is still there in the final release of Ubuntu 10.04. autofs isn't started at all when booting up, which means I have to start it manually each time with 'service autofs start'.

So far none of the workarounds I tried seem to work. This is pretty troublesome, as having to manually start autofs after each reboot is quite annoying.

ahenric (ahenric) wrote :

I can also confirm the problem in Kubuntu 10.04. autofs is not started when booting, only with 'service autofs start'. It was working in Kubuntu 9.10, though. So something changed again.

Robbie Williamson (robbiew) wrote :

Can anyone confirm whether or not this problem still exists in 10.04.1 or 10.10? I'm asking because we pushed out an upstart patch that works around an issue with writing to /dev/console in the kernel, which was causing certain services that try to write to /dev/console not to start. See http://bugs.launchpad.net/bugs/554172 for details (warning, it's LONG)

For me, running Maverick, this problem still exists.
autofs seems to start up correctly after changing its upstart configuration file:

--- /tmp/autofs.conf.orig 2010-10-12 20:08:12.000000000 +0200
+++ /etc/init/autofs.conf 2010-10-10 21:25:34.000000000 +0200
@@ -2,7 +2,8 @@
 author "Chuck Short <email address hidden>"

 start on (filesystem
- and net-device-up IFACE!=lo)
+ and net-device-up )
 stop on runlevel[!2345]

 console output

I do have the impression that system startup takes longer.

Paul Omernik (leviathor) wrote :

I am noticing this in my Maverick installations, and am now seeing it in some of my 10.04.1 installations as well. I did not have this issue with 10.04 or previous, though I was (am) affected by the boot priority of NIS bug (569757), which affects auto mounts.

It seems that Hans' post #25 has alleviated the issue for me on one Maverick system; I've yet to upgrade/triage further Lucid/Maverick systems.

I can confirm that Hans' upstart configuration change in post #25 fixes the bug in Karmic 9.10. I didn't notice the start up time being any slower but I haven't measured it.

Jeremy Nickurak (nickurak) wrote :

Still hitting this bug under natty as well.

Claudio Bernardini (claudiob) wrote :

I had the same bug in the past, but with 10.10 everything was smooth.
With Natty 11.04 the bug is hitting again.
When I restart autofs, the automounter starts to work as expected.

Jeremy Nickurak (nickurak) wrote :

Is this an Ubuntu bug? Is there a configuration problem in Ubuntu? Upstart problem? Or is it an upstream issue? This has been kicking around for over 5 years now.

How does the workaround in post #25 work? It seems like that would start autofs as soon as any network device comes up, when what you'd want is to bring it up only when a non-loopback device comes up; so I'd expect the original version to work better than the workaround.

mrtvfuencxozd (mrtvfuencxozd) wrote :

post #25 did not seem to work for me (10.10).

In my case, the NFS entries I need to mount are stored in a NIS server.
As a fix I've created the following file:
description "Wait for NIS"
author "moi"
start on starting autofs
task

script
        while [ -z "$(pgrep ypbind)" ]; do
                sleep 5
        done
end script

something similar can probably be created to check ldap/... before starting autofs
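By way of illustration, a hedged sketch of such a job for LDAP (a hypothetical /etc/init/wait-for-ldap.conf; assumes ldapsearch from ldap-utils and an anonymously readable rootDSE):

```
description "Wait for LDAP"
start on starting autofs
task

script
    # ldapsearch exits non-zero while the directory is unreachable;
    # poll the rootDSE until the server answers.
    while ! ldapsearch -x -b "" -s base >/dev/null 2>&1; do
        sleep 5
    done
end script
```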

Daniel Miller (dmiller) wrote :

Bug still exists on Natty. A revised script for NetworkManager seems to fix it for me:



#!/bin/sh
# Local interface changes are irrelevant to this
if [ "$IFACE" = lo ]; then
    exit 0
fi

reload autofs

Hello Canonical guys,

I have this problem especially on new and fast machines. Boot is just too fast. If autofs uses network-based maps (NIS, NIS+, LDAP), it can only be started after the network is up (upstart bug?). Another solution is to start immediately (for static entries) and, after the network is up, reload the configuration. If the reload script does not hurt other cases, please add it to the autofs5-ldap package.

This bug hurts enterprise clients. Other distributions just work with autofs/ldap. Ubuntu should even have a GUI to make configuring autofs/ldap easy.

I also noticed a very similar problem with Samba. Since it comes up before the network, it does not find the DC and refuses to authenticate users until it is restarted. This is very annoying, especially for a CUPS print server.

FYI, in my case (11.04), 'reload autofs' is not enough. I need to use 'restart autofs'.

Gergely Katona (gkatona) wrote :

I maintain a 10.04 LTS based cluster of workstations where each node is subject to some stress and needs to be restarted once in a while. Autofs maps of NFS shares are distributed via LDAP. With its default configuration, autofs fails to initialize (or, in some rare cases, even to start) in about 50% of all boots, making the nodes unusable until autofs is restarted. In addition to this bug, bind mounts sometimes fail to initialize at boot and need a remount.
After trying several suggested workarounds unsuccessfully, I am using the following not-too-elegant rc.local script, which hammers autofs into submission (works 99% of the time). I also advise my users not to restart their computers unless absolutely necessary.

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.

sleep 10
service autofs start &
sleep 60
service autofs restart &
sleep 60
service autofs restart &
sleep 60
mount -a

exit 0

Still present in oneiric 11.10

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in upstart (Ubuntu):
status: New → Confirmed
Sorin Sbarnea (ssbarnea) wrote :

I just want to add that I got the same bug on 11.10 and that the rc.local trick doesn't seem to work but if I run `reload autofs` once as a local user, it will start to work instantly!

Vernon Tang (vtang) on 2012-04-12
Changed in autofs5 (Ubuntu):
status: New → Confirmed
Thomas Schweikle (tps) wrote :

Same for Ubuntu 12.04 LTS. Since this isn't acceptable in a company environment, I've stopped migrating from Ubuntu 10.04 to 12.04. This is something that has to be fixed before any further upgrades.

This fails not only for Windows XP shares; it fails for NFS too. I have to restart autofs after all interfaces are up to make it work. The problem: autofs is started before *all* interfaces are up and running (it waits only on the local loopback). autofs then ignores any interface that comes up afterwards.

Two solutions:
1. make autofs start *after* all configured interfaces are up, or
2. make autofs aware of additional interfaces, picking them up as they come up.

At the moment autofs has to be restarted if interfaces come and go. If interfaces go down, it might take a long time until autofs finally emits an error message about not being able to mount a share. If interfaces come up later, autofs ignores them. Really bad if autofs is used to automount user homes and the like from centralized servers!

Charon (charon030) wrote :

We have the same problem on our clients within our company (ldap+nfs+autofs, 12.04). Because of this bug we have started migrating new machines to SUSE. A fix would be really great.

Hardy Heroin (hardy-heroin) wrote :

Ubuntu 12.04 LTS here. I can confirm this bug. The automount service needs to retrieve the automount NFS maps auto_home and auto_group from LDAP, which it is unable to do because the network isn't up yet. To make things a bit more complicated, these are Kerberized (krb5) NFSv4 mounts.
Once automount has failed, it doesn't try again when the network comes up.
It seems to me this could be a case for upstart to enforce the network being up before (network) automounts are attempted, or for automount to be at least smart enough to try again some time later or when the network is up.
This is a serious bug in large-scale Linux environments. Of course there are workarounds such as the ones documented in this thread, but they are ugly, and why should every user/administrator have to reinvent the wheel when it is clear what the problem is?

Pierre-Marie Dhaussy (pihemde) wrote :

I have the same problem between Synology DS411j NAS shares and a Debian Wheezy autofs install.

Ruben Nielsen (nielsen-ruben) wrote :

I tried a bunch of fixes, some from this thread. No luck. On Ubuntu 14.04 I ended up simply doing:

cat $AUTOFS_UPSTART | sed "s/^start on runlevel.*$/start on (local-filesystems and net-device-up IFACE!=lo)/" | tee $AUTOFS_UPSTART

works like a charm
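A caveat on that one-liner: `cat FILE | ... | tee FILE` writes back into the file while it is still being read, which can race and truncate it. A safer sketch of the same substitution with `sed -i`, demonstrated here on a scratch file rather than the real /etc/init/autofs.conf:

```shell
#!/bin/sh
# Apply the same substitution in place with sed -i (per-file rewrite)
# instead of piping cat through tee back into the same file.
scratch=$(mktemp)
printf 'start on runlevel [2345]\n' > "$scratch"

sed -i 's/^start on runlevel.*$/start on (local-filesystems and net-device-up IFACE!=lo)/' "$scratch"

cat "$scratch"   # start on (local-filesystems and net-device-up IFACE!=lo)
rm -f "$scratch"
```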

Claudio Bernardini (claudiob) wrote :

Solution #43 works perfectly on our machines using autofs mounts on LDAP.
Thanks Ruben!

dino99 (9d9) wrote :

outdated flavor, report about a newer active version if needed

Changed in upstart (Ubuntu):
status: Confirmed → Invalid
Changed in autofs5 (Ubuntu):
status: Confirmed → Invalid
Changed in autofs (Ubuntu):
status: Confirmed → Invalid
Chris Crisafulli (itnet7) wrote :

This is happening to us on Ubuntu 14.04.3, authenticating against LDAP using sssd. I changed my /etc/init/autofs.conf as recommended by Ruben (#43 above) and it works for me as well.

Nikolaus Demmel (nikolaus) wrote :

This still happens to us on 14.04.4 with ldap (+ autofs + nss). I'll be trying the proposed fix #43.

Please reopen this bug.

This still happens to us on 16.04.1 LTS with ldap (+ autofs + nfs).

Why is this not fixed?

Tronde (tronde) wrote :
Download full text (4.1 KiB)

Good evening,

I see this bug in Ubuntu 16.04.1 LTS with autofs version 5.1.1-1ubuntu3 when I try to access cifs shares from my NAS.

When I try to access the shares with 'cd' or 'ls' after boot, with the network up and running on both sides, I get the error message "File or directory not found." After running `sudo systemctl restart autofs.service` it works like a charm.

Here is my '''/etc/auto.master.d/diskstation.autofs''':
/home/tronde/diskstation /etc/auto.diskstation

And my '''/etc/auto.diskstation''':
music -fstype=cifs,uid=1000,credentials=/home/tronde/.smbcredentials ://IP-ADRESSE/music
photo -fstype=cifs,uid=1000,credentials=/home/tronde/.smbcredentials ://IP-ADRESSE/photo
share -fstype=cifs,uid=1000,credentials=/home/tronde/.smbcredentials ://IP-ADRESSE/share
video -fstype=cifs,uid=1000,credentials=/home/tronde/.smbcredentials ://IP-ADRESSE/video
home -fstype=cifs,uid=1000,gid=1000,credentials=/home/tronde/.smbcredentials ://IP-ADRESSE/home

The service configuration looks like:
cat /run/systemd/generator.late/graphical.target.wants/autofs.service
# Automatically generated by systemd-sysv-generator

[Unit]
Description=LSB: Automounts filesystems on demand

[Service]
ExecStart=/etc/init.d/autofs start
ExecStop=/etc/init.d/autofs stop
ExecReload=/etc/init.d/autofs reload

The source of this configuration from '''/etc/init.d/autofs''':

cat /etc/init.d/autofs
#! /bin/sh

### BEGIN INIT INFO
# Provides: autofs
# Required-Start: $network $remote_fs $syslog
# Required-Stop: $network $remote_fs $syslog
# Should-Start: ypbind nslcd slapd
# Should-Stop: ypbind nslcd slapd
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Automounts filesystems on demand
# Description: Automounts filesystems on demand
### END INIT INFO

# Location of the automount daemon and the init directory

test -e $DAEMON || exit 0

export PATH

. /lib/lsb/init-functions

# load customized configuration settings
if [ -r /etc/default/autofs ]; then
 . /etc/default/autofs
fi

start_stop_autofs() {
 start-stop-daemon "$@" --pidfile $PIDFILE --exec $DAEMON -- \
  $OPTIONS --pid-file $PIDFILE
}

start() {
 log_action_begin_msg "Starting $PROG"

 if ! grep -qw autofs /proc/filesystems; then
  if ! modprobe autofs4 >/dev/null 2>&1; then
   log_action_end_msg 1 "failed to load autofs4 module"
   return 1
  fi
 elif [ -f /proc/modules ] && grep -q "^autofs[^4]" /proc/modules; then
  log_action_end_msg 1 "autofs kernel module is loaded, autofs4 required"
  return 1
 fi

 if ! start_stop_autofs --start --oknodo --quie...


Tronde (tronde) wrote :

I was able to reproduce the issue on a fresh install of Xenial today. You can find the debug log at: https://paste.ubuntuusers.de/423433/

Changed in autofs (Ubuntu):
status: Invalid → New
dino99 (9d9) wrote :

Package upgrade is needed

autofs (5.1.2-1) unstable; urgency=medium

  * New upstream release [June 2016] (Closes: #846054).
  * Build with "--disable-mount-locking" (Closes: #721331).
  * service: fixed path to "kill" utility (Closes: #785563).
  * Modernised and converted Vcs URLs to HTTPS.
  * Standards-Version: 3.9.8.
  * Removed myself from Uploaders.

 -- Dmitry Smirnov <email address hidden> Fri, 23 Dec 2016 10:34:46 +1100

tags: added: upgrade-software-version xenial yakkety zesty
Changed in autofs (Ubuntu):
status: New → Confirmed
Tronde (tronde) wrote :

Will there be an updated version in the next Xenial point release? Is there any chance to get autofs 5.1.2-1 at all?

dino99 (9d9) on 2017-12-19
tags: removed: yakkety zesty
summary: - autofs needs to be restarted to pick up some shares
+ [SRU] [xenial] autofs needs to be restarted to pick up some shares

A good first step would be for someone to check that the bug is fixed in bionic, as the package has been updated there.

Tronde (tronde) wrote :

I've just checked in Bionic:

$ uname -rvsop
Linux 4.15.0-10-generic #11-Ubuntu SMP Tue Feb 13 18:23:35 UTC 2018 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu Bionic Beaver (development branch)
Release: 18.04
Codename: bionic
$ sudo apt list autofs
Listing... Done
autofs/bionic,now 5.1.2-1ubuntu2 amd64 [installiert]

In this version the bug seems to be fixed. AutoFS works as expected.

Tronde (tronde) wrote :

Is there anything else to do? Any chance to get a version update in Xenial?

Thanks for the ping on this long-standing bug, @Tronde.
I updated the state accordingly.

If a change can be identified between Xenial and Bionic, one could try to SRU it into Xenial.
But I took a (quick) look and found nothing obvious.
There are major changes, like going from SysV init in /etc/init.d/autofs to a native systemd service in /lib/systemd/system/autofs.service.
One would need to debug whether something can be brought into the SysV init to fix it.

I appreciate your earlier steps to reproduce, but they failed for me :-/
Without more time to debug why I can't reproduce it at the moment, I'd need to ask you (or others) to work out what the missing bit might be to fix up Xenial.

Changed in autofs (Ubuntu):
status: Confirmed → Fix Released
no longer affects: autofs5 (Ubuntu Xenial)
no longer affects: upstart (Ubuntu Xenial)
Changed in autofs (Ubuntu Xenial):
status: New → Incomplete
Tronde (tronde) wrote :

Today I've checked in Xenial with autofs 5.1.1-1ubuntu3.1. So far it seems to be working here as well. I would guess this bug is fixed then.

Tronde (tronde) wrote :

Ok, I'm lost. Today I checked another Xenial machine with autofs 5.1.1-1ubuntu3.1. The kernel is 4.4.0-116-generic, as on the system from my last post. But on this system I have to restart autofs.service after every (re)boot, otherwise I cannot access the shares. I really don't get it.

Maybe some other users could try to reproduce the problem.

Tronde (tronde) wrote :


It's me again. As of today I've checked three different hosts, all running Xenial with autofs 5.1.1-1ubuntu3.1 and kernel 4.4.0-116-generic. The hosts are a Thinkpad T410, an X201 and a VirtualBox guest system.

While the VirtualBox guest system works just fine, the autofs on the Thinkpads did not work directly after boot. The autofs.service needs to be restarted first in order to get autofs to work.

I have used tcpdump and strace to get a better look at what is happening on the Thinkpads. So I did the following:

1. Boot Thinkpad
2. Setup tcpdump/wireshark snoop
3. Check that autofs is running with `sudo systemctl status autofs.service`
4. Run `strace ls -ld $MOUNTPOINT` where $MOUNTPOINT is the automounter share from automounter map

In strace I see:
lstat("$MOUNTPOINT", 0x14861a0) = -1 ENOENT (No such file or directory)

The snoop shows that not a single packet goes over the network to the NAS. Next step:

5. Run `sudo systemctl restart autofs.service`
6. Run `strace ls -ld $MOUNTPOINT` where $MOUNTPOINT is the automounter share from automounter map

Now, in strace I see:
lstat("$MOUNTPOINT", {st_mode=S_IFDIR|0777, st_size=0, ...}) = 0

And in the snoop I see the expected packets crossing the network.

That's it. I do not know what else I can do to help with this matter. If you need additional information, please tell me what you need.
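The manual check above can be scripted. A minimal sketch (the mount point path is a placeholder for the entry from the automounter map; the restart line is commented out since it needs root):

```shell
#!/bin/sh
# Probe an automount point the same way the comment does (a plain stat via
# ls -ld) and report whether autofs likely needs a kick.

mountpoint_alive() {
    ls -ld "$1" >/dev/null 2>&1
}

# Placeholder path; substitute the entry from your automounter map.
MOUNTPOINT="${MOUNTPOINT:-/home/tronde/diskstation/share}"

if ! mountpoint_alive "$MOUNTPOINT"; then
    echo "mount point not visible; autofs likely needs a restart" >&2
    # sudo systemctl restart autofs.service
fi
```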


Tronde (tronde) wrote :

Good Morning,

I would like to add some news. I have removed and purged autofs from one of my Thinkpads and installed autofs 5.1.2-1 from Bionic in Xenial. I have used the packages from http://archive.ubuntu.com/ubuntu/pool/main/a/autofs/autofs_5.1.2-1ubuntu2_amd64.de

Surprise: the behaviour is exactly the same as with version 5.1.1. It won't work until autofs.service is restarted.

Lenin (gagarin) wrote :

Out of 100 machines, it works 95 times. There are cases where it still fails here with 18.04 Bionic.

summary: - [SRU] [xenial] autofs needs to be restarted to pick up some shares
+ [SRU] [xenial] [bionic] autofs needs to be restarted to pick up some
+ shares
Kevin (kvasko) wrote :

I'm running into this bug as well on 16.04.5

I've tried the autofs.service (systemd) solution. I've tried the startup scripts in rc.local, and it "seems" to work most of the time, but it's not really ideal.

Jeff Davis (jdavis-n) wrote :

The issue seems to have returned with an update to 16.04 in mid-November. I will continue to peruse the above solutions and see if there's anything I have not tried yet, but so far no joy. OpenLDAP + autofs still requires a restart of autofs.

Tronde (tronde) wrote :

Ubuntu 18.04.1 with autofs version 5.1.2-1ubuntu3 --> Still the same problem.
Service restart is required to get autofs to work.

How can I help triage this bug? What information is needed to help reproduce it so it can be fixed?
