2016-04-30 21:04:49 |
Sam Van den Eynde |
bug |
|
|
added bug |
2016-04-30 21:05:28 |
Sam Van den Eynde |
description |
It looks like, when booting off ZFS (ZFS holds /boot) with the rootdelay boot option set, the boot process fails in the initrd phase, asking to manually import the pool using zpool import -f -R / -N. I only had one system with that parameter set, which I seldom reboot.
I did not find an upstream reference for this bug or behavior.
The error is caused by the fact that the pool is already imported: "zpool status" executed at the initramfs prompt will correctly list the pool and all devices online. To continue, one has to export the pool, re-import it, and exit the initramfs prompt, after which regular booting continues. Not exporting and re-importing it leaves the pool read-only, leading to boot errors further down the road (systemd units failing).
I noticed zfs_autoimport_disable is set to 1 in the initramfs environment, so looking at /usr/share/initramfs-tools/scripts/zfs, this section might be the issue: zpool import succeeds, but $ZFS_HEALTH never returns with the correct status (I'm not a programmer, but perhaps ZFS_HEALTH is a local variable in the zfs_test_import function):
delay=${ROOTDELAY:-0}
if [ "$delay" -gt 0 ]
then
    # Try to import the pool read-only. If it does not import with
    # the ONLINE status, wait and try again. The pool could be
    # DEGRADED because a drive is really missing, or it might just
    # be slow to be detected.
    zfs_test_import
    retry_nr=0
    while [ "$retry_nr" -lt "$delay" ] && [ "$ZFS_HEALTH" != "ONLINE" ]
    do
        [ "$quiet" != "y" ] && log_begin_msg "Retrying ZFS read-only import"
        /bin/sleep 1
        zfs_test_import
        retry_nr=$(( $retry_nr + 1 ))
        [ "$quiet" != "y" ] && log_end_msg
    done
    unset retry_nr
    unset ZFS_HEALTH
fi
unset delay |
|
2016-04-30 21:11:47 |
Sam Van den Eynde |
description |
Edit: to be clear: I removed the rootdelay parameter, regenerated the initrd, and was able to boot successfully afterwards. |
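The local-variable hypothesis above can be illustrated with a minimal POSIX sh sketch (the function names here are hypothetical, not the real zfs_test_import): an assignment made inside a function whose body runs in a subshell never reaches the caller, which would leave $ZFS_HEALTH empty in the retry loop even after a successful test import.

```shell
#!/bin/sh
# Minimal sketch of the scoping hypothesis (hypothetical names, not the
# actual initramfs helper). A function defined with a parenthesized body
# runs in a subshell, so its assignments are lost when it returns.

health_in_subshell() (
    ZFS_HEALTH="ONLINE"    # set in a subshell: lost on return
)

health_in_caller() {
    ZFS_HEALTH="ONLINE"    # set in the caller's environment: survives
}

ZFS_HEALTH=""
health_in_subshell
echo "after subshell function: '${ZFS_HEALTH}'"    # prints ''

health_in_caller
echo "after normal function:   '${ZFS_HEALTH}'"    # prints 'ONLINE'
```

If the real helper behaved like the first form, the while loop would keep retrying for the full rootdelay even though the read-only import had already succeeded, which matches the observed symptom of the pool being imported (read-only) at the emergency prompt.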
|
2016-05-01 03:13:51 |
Richard Laager |
zfs-linux (Ubuntu): status |
New |
Incomplete |
|
2016-05-02 14:59:59 |
Richard Laager |
zfs-linux (Ubuntu): status |
Incomplete |
Invalid |
|
2016-05-02 20:22:24 |
Sam Van den Eynde |
attachment added |
|
initrd zfs script https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1577057/+attachment/4653859/+files/zfs |
|
2016-05-02 21:40:50 |
Richard Laager |
zfs-linux (Ubuntu): status |
Invalid |
Incomplete |
|
2016-05-03 22:09:14 |
Richard Laager |
attachment added |
|
zfs https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1577057/+attachment/4654842/+files/zfs |
|
2016-05-03 22:09:25 |
Richard Laager |
zfs-linux (Ubuntu): status |
Incomplete |
Confirmed |
|
2016-05-03 22:11:17 |
Richard Laager |
bug |
|
|
added subscriber Richard Laager |
2016-05-03 22:11:26 |
Richard Laager |
zfs-linux (Ubuntu): assignee |
|
Richard Laager (rlaager) |
|
2016-05-06 01:42:29 |
Richard Laager |
zfs-linux (Ubuntu): status |
Confirmed |
In Progress |
|
2016-05-26 21:10:09 |
Launchpad Janitor |
zfs-linux (Ubuntu): status |
In Progress |
Fix Released |
|
2016-05-26 22:23:52 |
Colin Ian King |
zfs-linux (Ubuntu): importance |
Undecided |
Medium |
|
2017-05-15 22:43:07 |
dreamcat4 |
bug |
|
|
added subscriber dreamcat4 |