A 'grep swap' extract from /var/log/boot.log with mountall --debug:
local 6/6 remote 0/0 virtual 11/11 swap 0/1
try_mount: /dev/mapper/lucid-swap_1 waiting for device
try_udev_device: block /dev/mapper/lucid-swap_1 e26b43b0-7782-44da-9a8f-78c7955e3c74 (null)
try_udev_device: /dev/mapper/lucid-swap_1 by name
run_fsck: /dev/mapper/lucid-swap_1: no check required
activating /dev/mapper/lucid-swap_1
spawn: swapon /dev/mapper/lucid-swap_1
spawn: swapon /dev/mapper/lucid-swap_1 [1021]
swapon: /dev/mapper/lucid-swap_1: swapon failed: Device or resource busy
mountall: swapon /dev/mapper/lucid-swap_1 [1021] terminated with status 255
mountall: Problem activating swap: /dev/mapper/lucid-swap_1
mounted: /dev/mapper/lucid-swap_1
swap finished
local 6/6 remote 0/0 virtual 11/11 swap 1/1
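Worth noting (this is general swapon behaviour, not from the log above): "Device or resource busy" means the kernel considers the device already in use, most commonly because it was already activated as swap earlier in the boot (e.g. by the initramfs for an encrypted swap). A quick way to compare what the kernel already has active against what mountall is trying to activate:

```shell
# What the kernel already has active as swap:
cat /proc/swaps
# What mountall will try to activate from fstab:
grep swap /etc/fstab 2>/dev/null || true
```

If the /dev/mapper device (or its underlying dm device) already appears in /proc/swaps when mountall runs, the swapon failure is expected and harmless in itself.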
Just a guess here... If each of the filesystems mountall discovers is mounted in the background by a spawned process (assumed from the logging), then consider that /home is generally the largest mount on a default install and will take the longest. Could it be that, with the swap activation having failed, everything except /home has mounted OK, but mountall has given up waiting because of the failure and killed off the spawned mounts?