netbooting the bionic live CD over NFS goes straight to maintenance mode:
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| systemd | Unknown | Unknown | | |
| systemd (Ubuntu) | Status tracked in Disco | | | |
| Xenial | | Medium | Victor Tapia | |
| Bionic | | Medium | Victor Tapia | |
| Cosmic | | Medium | Victor Tapia | |
| Disco | | Medium | Dimitri John Ledkov | |
Bug Description
[Impact]
Manually mounting a network share (NFS) and then masking its mount unit breaks the state of other units (and their dependencies).
Casper is masking a mounted NFS share, blocking the normal boot process as described in the original description, but the issue comes from systemd.
[Test Case]
- NFS mount point at /media
root@iscsi-
10.230.
- Test mount point (/test) defined in /etc/fstab:
root@iscsi-
tmpfs /test tmpfs nosuid,nodev 0 0
1. If media.mount is not masked, everything works fine:
root@iscsi-
root@iscsi-
Active: active (mounted) since Thu 2018-11-15 16:03:59 UTC; 3 weeks 6 days ago
root@iscsi-
Active: inactive (dead) since Thu 2018-12-13 10:33:52 UTC; 4min 11s ago
root@iscsi-
root@iscsi-
Active: active (mounted) since Thu 2018-12-13 10:38:13 UTC; 3s ago
root@iscsi-
tmpfs on /test type tmpfs (rw,nosuid,
root@iscsi-
root@iscsi-
Active: inactive (dead) since Thu 2018-12-13 10:38:32 UTC; 3s ago
root@iscsi-
2. If media.mount is masked, other mounts fail:
root@iscsi-
Created symlink /etc/systemd/
root@iscsi-
Job for test.mount failed.
See "systemctl status test.mount" and "journalctl -xe" for details.
root@iscsi-
Active: failed (Result: protocol) since Thu 2018-12-13 10:40:06 UTC; 10s ago
root@iscsi-
tmpfs on /test type tmpfs (rw,nosuid,
root@iscsi-
root@iscsi-
Active: failed (Result: protocol) since Thu 2018-12-13 10:40:06 UTC; 25s ago
root@iscsi-
tmpfs on /test type tmpfs (rw,nosuid,
root@iscsi-
Job for test.mount failed.
See "systemctl status test.mount" and "journalctl -xe" for details.
root@iscsi-
tmpfs on /test type tmpfs (rw,nosuid,
tmpfs on /test type tmpfs (rw,nosuid,
root@iscsi-
root@iscsi-
tmpfs on /test type tmpfs (rw,nosuid,
tmpfs on /test type tmpfs (rw,nosuid,
[Regression potential]
Minimal. Originally, one failing mount point blocked the processing of the rest due to how the return codes were handled for every line in /proc/self/
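The behavioural difference can be illustrated with a toy shell loop (hypothetical, NOT the actual systemd C code): before the fix, the first entry that failed to set up aborted processing of the remaining mountinfo lines; after the fix, each line is handled independently.

```shell
# Toy illustration (not systemd code) of the return-code handling change.
# Each input line stands for one mountinfo entry; FAIL marks an entry whose
# mount unit could not be set up (e.g. EBUSY on the always-active nfsroot).
lines='/dev/mqueue FAIL
/sys/kernel/config OK
/tmp OK'

# Old behaviour: the first failure aborted the loop, so later entries were
# never processed and their units ended up failed with result 'protocol'.
before=0
while read -r path status; do
    [ "$status" = "FAIL" ] && break
    before=$((before + 1))
done <<EOF
$lines
EOF

# Fixed behaviour: note the error but keep processing the remaining entries.
after=0
while read -r path status; do
    [ "$status" = "FAIL" ] && continue
    after=$((after + 1))
done <<EOF
$lines
EOF

echo "entries processed: before=$before after=$after"
```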
[Other Info]
Upstream bug: https:/
Fixed upstream with commit: https:/
[Original Description]
netbooting the bionic live CD[1] over NFS goes straight to maintenance mode:
[1] http://
# casper.log
Begin: Adding live session user... ... dbus-daemon[568]: [session uid=999 pid=568] Activating service name='org.
dbus-daemon[568]: [session uid=999 pid=568] Successfully activated service 'org.gtk.
dbus-daemon[568]: [session uid=999 pid=568] Activating service name='org.
fuse: device not found, try 'modprobe fuse' first
dbus-daemon[568]: [session uid=999 pid=568] Successfully activated service 'org.gtk.
(gvfsd-
(gvfsd-
A connection to the bus can't be made
done.
Begin: Setting up init... ... done.
Eric Desrochers (slashd) wrote : | #1 |
summary: | - fuse: device not found, try 'modprobe fuse' first + netbooting the bionic live CD over NFS goes straight for maintenance mode : |
summary: | - netbooting the bionic live CD over NFS goes straight for maintenance mode : + netbooting the bionic live CD over NFS goes straight to maintenance mode : |
description: | updated |
Jean-Baptiste Lallement (jibel) wrote : | #2 |
From the journal
[ 20.311413] ubuntu systemd[1]: sys-kernel-
[ 20.311594] ubuntu systemd[1]: Mounting Kernel Configuration File System...
[ 20.313502] ubuntu mount[813]: mount: /sys/kernel/config: configfs already mounted on /sys/kernel/config.
[ 20.313793] ubuntu systemd[1]: sys-kernel-
[ 20.313842] ubuntu systemd[1]: sys-kernel-
[ 20.314013] ubuntu systemd[1]: Failed to mount Kernel Configuration File System.
[ 20.325702] ubuntu systemd[1]: Mounting FUSE Control File System...
[ 20.325777] ubuntu mount[814]: mount: /sys/fs/
[ 20.326038] ubuntu systemd[1]: sys-fs-
[ 20.326089] ubuntu systemd[1]: sys-fs-
[ 20.326248] ubuntu systemd[1]: Failed to mount FUSE Control File System.
Taras Prokopenko (tpro) wrote : | #3 |
For a temporary solution, see
https:/
Launchpad Janitor (janitor) wrote : | #4 |
Status changed to 'Confirmed' because the bug affects multiple users.
Changed in casper (Ubuntu): | |
status: | New → Confirmed |
Changed in casper (Ubuntu): | |
importance: | Undecided → High |
Eric Desrochers (slashd) wrote : | #5 |
The sequence of failure seems to be the following:
-- Unit dev-mqueue.mount has failed.
-- Unit sys-kernel-
-- Unit dev-hugepages.mount has failed.
-- Unit sys-kernel-
-- Unit sys-fs-
-- Unit tmp.mount has failed.
-- Unit local-fs.target has failed.
-- Unit dns-clean.service has failed.
-- Unit systemd-
-- Unit systemd-
-- Unit sys-kernel-
-- Unit sys-fs-
# journalctl -xb:
-- The limits controlling how much disk space is used by the journal may
-- be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=,
-- RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in
-- /etc/systemd/
Apr 03 12:59:15 ubuntu systemd-
Apr 03 12:59:15 ubuntu systemd[1]: Failed to set up mount unit: Device or resource busy
Apr 03 12:59:15 ubuntu systemd[1]: dev-mqueue.mount: Mount process finished, but there is no mount.
Apr 03 12:59:15 ubuntu systemd[1]: dev-mqueue.mount: Failed with result 'protocol'.
Apr 03 12:59:15 ubuntu systemd[1]: Failed to mount POSIX Message Queue File System.
-- Subject: Unit dev-mqueue.mount has failed
-- Defined-By: systemd
-- Support: http://
--
-- Unit dev-mqueue.mount has failed.
Eric Desrochers (slashd) wrote : | #6 |
This recipe seems to be enough to start GNOME (as a potential workaround until this gets fixed):
# systemctl mask tmp.mount
# ctrl-d
Launchpad Janitor (janitor) wrote : | #7 |
Status changed to 'Confirmed' because the bug affects multiple users.
Changed in systemd (Ubuntu): | |
status: | New → Confirmed |
Changed in casper (Ubuntu): | |
status: | Confirmed → Fix Released |
Changed in systemd (Ubuntu): | |
status: | Confirmed → Fix Released |
beta-tester (alpha-beta-release) wrote : | #8 |
Does the status "Fix Released" from 2018-04-27 mean that the fix is already included in the Ubuntu 18.04 release, or will it first be included in 18.04.1?
Stephen Early (steve-greenend) wrote : | #9 |
I've just checked: an NFS boot image built from the currently released Ubuntu 18.04 still has this bug. I can't see any relevant commits to casper or systemd, either.
Simone Scisciani (scisciani) wrote : | #10 |
Sorry, by mistake I set the status to "Fix Released" and I cannot set it back to "Confirmed". The bug has not been fixed yet.
Eric Desrochers (slashd) wrote : | #11 |
I set both back to 'Confirmed'.
Changed in casper (Ubuntu): | |
status: | Fix Released → Confirmed |
Changed in systemd (Ubuntu): | |
status: | Fix Released → Confirmed |
Woodrow Shen (woodrow-shen) wrote : | #12 |
I can confirm that ubiquity can finish the installation by appending the systemd mask-services string "systemd.
richud (richud.com) wrote : | #13 |
I can confirm Woodrow Shen's workaround works for me too (also tested OK with Kubuntu and Ubuntu MATE 18.04 automated deploys).
Woodrow Shen (woodrow-shen) wrote : | #14 |
I have kept doing experiments (with Dell/HP laptops) and currently have some conclusions:
1. Why the issue happens (not the real root cause)
Due to local-fs.target, the fstab-generator automatically adds dependencies of type Before= to all mount units that refer to local mount points for this target unit. In addition, it adds dependencies of type Wants= to this target unit for those mounts listed in /etc/fstab that have the auto mount option set[1].
Therefore, the emergency shell is triggered by local-fs.target which is dependent on failures of several systemd mounts.
2. Two approaches for a workaround fix
1) append "systemd.
2) add "toram" into kernel boot options.
It would completely decompress the filesystem into RAM, which requires 3-4x more RAM and is hence undesirable[2].
3. Trade-off between the workarounds
Until we find the real solution, the better workaround is "toram": it not only speeds up the installation but also avoids the unstable network with NFS, despite requiring more RAM.
4. Concerns about the real solution
I think the solution may be more complicated if we really want to fix it; ideally we have to consider both the normal case (e.g. booting from a USB stick) and the NFS mount case to satisfy the conditions and avoid systemd dependency or 'protocol' failures.
[1] https:/
[2] https:/
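For illustration, the "toram" workaround from point 2 above plugs into a PXE menu entry like this (a sketch: the label, paths, and server address are placeholders, not values from this report):

```
LABEL bionic-live-toram
  KERNEL ubuntu-18.04/vmlinuz
  APPEND initrd=ubuntu-18.04/initrd boot=casper netboot=nfs nfsroot=192.0.2.10:/srv/nfs/ubuntu-18.04 toram
```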
David Coronel (davecore) wrote : | #15 |
I confirm the "toram" workaround from Woodrow allows me to PXE netboot the most recent Ubuntu 18.04 Desktop amd64 ISO image.
Martin Bogomolni (martinbogo) wrote : | #16 |
The "toram" workaround does not work for me attempting to boot on a SuperMicro X10DRW motherboard with 128GB of ram installed + SATA SSD. I have also added Woodrow Shen's workaround in the command line.
The difference is that I am attempting to install 18.04 server.
Environment is:
Debian "Wheezy" server running dnsmasq to provide DHCP and TFTP service
Synced/mirrored Ubuntu 18.04 repository being served via nginx
-----
May 29 18:08:13 ubuntu systemd[1]: dev-mqueue.mount: Mount process finished, but there is no mount.
May 29 18:08:13 ubuntu systemd[1]: dev-mqueue.mount: Failed with result 'protocol'.
May 29 18:08:13 ubuntu systemd[1]: Failed to mount POSIX Message Queue File System.
-- Subject: Unit dev-mqueue.mount has failed
-- Defined-By: systemd
-- Support: http://
--
-- Unit dev-mqueue.mount has failed.
Woodrow Shen (woodrow-shen) wrote : | #17 |
The "toram" option only affects desktop images, which use casper.
beta-tester (alpha-beta-release) wrote : | #18 |
"toram" works for me and also fixes the hanging at shutdown/reboot.
But "toram" takes more than twice as long to boot into the desktop (~3 minutes instead of ~1 minute).
Jon K (drkatz) wrote : | #19 |
Appears to still be affecting 18.04.1
Skye K (skyebuddha) wrote : | #20 |
I am having a similar issue in 18.04 and 18.04.1. I am trying to netboot and retrieve the install files from NFS or HTTP. I have seen a few others have similar issues: https:/
thh (thh01217) wrote : | #21 |
Based on in-depth analysis, I found the cause of the error:
“Apr 03 12:59:15 ubuntu systemd[1]: Failed to set up mount unit: Device or resource busy”
call tree on systemd mount.c & unit.c:
mount_dispatch_io
-> mount_load_
-> mount_setup_unit
-> mount_setup_
-> mount_add_extras
-> unit_set_
"unit_set_slice" always returns EBUSY, because the nfsroot is always in an active state when netbooting,
"mount_dispatch_io" gives up updating the mount state when "mount_
finally, all systemd mount units fail and then we go to the emergency shell.
lepidas blades rompolos (lepidas) wrote : | #22 |
This bug is still affecting me; I downloaded ISO images from Ubuntu yesterday.
Lukas (lukas-wringer) wrote : | #23 |
Is this bug lost, or not assigned to anyone? It is still broken; of course there are workarounds, but no one knows their exact consequences.
no longer affects: | ubiquity (Ubuntu) |
Eric Desrochers (slashd) wrote : | #24 |
I started to look at this problem from scratch, since it's been a while since I reported it...
It seems to go into emergency mode due to a failed attempt to start the unit "tmp.mount":
# /var/log/boot.log
65 emergency.target: Enqueued job emergency.
66 tmp.mount: Unit entered failed state.
Adding "systemd.
APPEND initrd=
Note:
- Out of curiosity I tested with Artful/17.10 (systemd 234) and it works, so it's possibly something introduced between v234 and v237 that changed the behaviour for tmp.mount, a change in mount, ...
- The problem is also reproducible in Cosmic, and journalctl was a little more verbose in Cosmic than it was for Bionic in my testing:
$ journalctl -a -u tmp.mount
-- Logs begin at Wed 2018-10-10 20:15:36 UTC, end at Wed 2018-10-10 20:15:43 UTC. --
Oct 10 20:15:36 ubuntu systemd[1]: tmp.mount: Directory /tmp to mount over is not empty, mounting anyway.
Oct 10 20:15:36 ubuntu systemd[1]: Mounting /tmp...
Oct 10 20:15:36 ubuntu systemd[1]: tmp.mount: Mount process finished, but there is no mount.
Oct 10 20:15:36 ubuntu systemd[1]: tmp.mount: Failed with result 'protocol'.
Oct 10 20:15:36 ubuntu systemd[1]: Failed to mount /tmp.
# src/core/mount.c
802 static void mount_enter_
803 assert(m);
804
805 if (m->result == MOUNT_SUCCESS)
806 m->result = f;
807
808 if (m->result != MOUNT_SUCCESS)
809 log_unit_
...
1282 switch (m->state) {
1283
1284 case MOUNT_MOUNTING:
1285 /* Our mount point has not appeared in mountinfo. Something went wrong. */
1286
1287 if (f == MOUNT_SUCCESS) {
1288 /* Either /bin/mount has an unexpected definition of success,
1289 * or someone raced us and we lost. */
1290 log_unit_
1291 f = MOUNT_FAILURE_
1292 }
and m->result is indeed equal to "MOUNT_
1955 [MOUNT_
I'll try to instrument things and create a custom ISO for further debugging/testing. This is where I am at the moment.
- Eric
Eric Desrochers (slashd) wrote : | #26 |
So far I highly suspect this commit[1] to be the offending one; it would "fit" with the bionic systemd version in Ubuntu[2] versus the upstream introduction of the change:
$ git describe --contains 006aabbd05
v237~47^2~2
[1] 006aabbd0 mount: mountinfo event is supposed to always arrive before SIGCHLD
[2] rmadison
systemd | 237-3ubuntu10 | bionic | source, amd64, arm64, armhf, i386, ppc64el, s390x
systemd | 237-3ubuntu10.3 | bionic-updates | source, amd64, arm64, armhf, i386, ppc64el, s390x
Eric Desrochers (slashd) wrote : | #27 |
With systemd on Xenial & Artful, there is no extra instruction in the MOUNT_MOUNTING case, which reinforces why it is working with these releases:
# src/core/mount.c
1182 case MOUNT_MOUNTING:
1183 case MOUNT_MOUNTING_
1184 case MOUNT_MOUNTING_
1185 case MOUNT_MOUNTING_
1186
1187 if (f == MOUNT_SUCCESS)
1188 mount_enter_
1189 else if (m->from_
1190 mount_enter_
1191 else
1192 mount_enter_dead(m, f);
1193 break;
So most likely, it falls into the MOUNT_MOUNTING_
# src/basic/
MOUNT_MOUNTING, /* /usr/bin/mount is running, but the mount is not done yet. */
Eric Desrochers (slashd) wrote : | #28 |
Additionally,
tmp.mount unit configuration :
https:/
# tmp.mount
--
..
ConditionPathIs
..
--
ConditionPathIs
When there is an exclamation mark ("!"), the validation is negated
For "ConditionPathI
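As an illustration of that negation syntax (a generic sketch, not the exact contents of Ubuntu's tmp.mount unit):

```
[Unit]
Description=Example unit with path conditions
# Without "!": run only if /media IS a mount point.
ConditionPathIsMountPoint=/media
# With "!": run only if /tmp is NOT already a mount point.
ConditionPathIsMountPoint=!/tmp
```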
Eric Desrochers (slashd) wrote : | #29 |
The "existing" /tmp comes from the casper code:
bionic/
...
cat > $FSTAB <<EOF
${UNIONFS} / ${UNIONFS} rw 0 0
tmpfs /tmp tmpfs nosuid,nodev 0 0
EOF
...
Eric Desrochers (slashd) wrote : | #30 |
By reading this article : https:/
I am really starting to think the easiest way to fix it is as described here:
...
/tmp as location for volatile, temporary userspace file system objects (X)
...
It is possible to disable the automatic mounting of some (but not all) of these file systems, if that is required. These are marked with (X) in the list above. You may disable them simply by masking them:
systemctl mask dev-hugepages.mount
...
I have tested masking "tmp.mount", but the official documentation recommends "dev-hugepages.
pxe configuration line :
"
APPEND initrd=
"
I'm starting to think that this may become the final solution to the new systemd behaviour, as indicated in the official documentation above, and not just a workaround.
Thoughts ?
Eric Desrochers (slashd) wrote : | #31 |
Could someone impacted test the dev-hugepages.mount masking within their PXE configuration and let me know how it works?
systemd.
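For reference, recent systemd versions accept systemd.mask= directly on the kernel command line (documented in kernel-command-line(7)); a sketch of a PXE append line, with placeholder paths and server address:

```
APPEND initrd=initrd.img boot=casper netboot=nfs nfsroot=192.0.2.10:/srv/nfs systemd.mask=dev-hugepages.mount
```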
tags: | added: sts |
Eric Desrochers (slashd) wrote : | #32 |
According to the documentation, the recommendation is to mask dev-hugepages.
Feel free to try both and let me know the outcome; results may vary from one setup to another.
- Eric
Marcel Partap (empee584) wrote : | #33 |
Mmh, #32 is not enough. I had to convert /tmp to an overlay mount, as regardless of how early, it seems there will be some important file (custom_
Brian Nelson (bhnelson) wrote : | #34 |
Eric,
The 'recommendation' for masking dev-hugepages you cite from that wiki page is clearly just an example of how you could disable one of the various mounts described there. I don't think it's a recommendation to fix anything in particular.
FWIW: Masking dev-hugepages doesn't seem to help much for me. Masking tmp seems to let the system boot up, but the other mount units still fail and the systemd status is red, 'degraded'.
I've ended up masking all affected mounts (per comments 12 and 14) with the addition of masking run-rpc_
I'm still having problems logging into Gnome with a user with NFS home. I'm not sure if that's related to this issue or something else though. Still looking at that.
I think you're on the right track in comment 27. I get the feeling that somewhere along the line a result of 'this is already mounted' changed from a success to a failure in systemd, possibly due to the change in mount.c you pasted.
beta-tester (alpha-beta-release) wrote : | #35 |
@Eric,
I just tested what you suggested in your comment #31 with the Ubuntu 18.10 release ...
KERNEL http://
INITRD http://
APPEND nfsroot=
... but then I again ran straight into the emergency console.
(not sure if that information is still helpful)
At the moment "systemd.
but with the issues of:
- a few red "FAILED" messages at boot time;
- at reboot/poweroff, often running into "stop jobs" that run endlessly while shutting down;
- or hanging forever at "Starting Shuts down the 'live' preinstalled system clearly...";
The second-best solution is "toram"; with it I never observed any "FAILED" messages and never had the reboot/poweroff issues (maybe there are race conditions; with "toram" nothing has to be loaded from NFS via the network).
But "toram" takes a lot of time, because the whole squashfs image has to be loaded into RAM first, before it can be mounted.
BTW: Lubuntu 18.10 shows the same behavior as Ubuntu 18.10, but it does not show the issues with reboot/poweroff hanging or endlessly running stop jobs.
Brian Nelson (bhnelson) wrote : | #36 |
So I've found a complete workaround for this. I also found that this issue is NOT new in 18.04, as it also affects 16.x (and likely 15 and 17 too). However, it is DIFFERENT in 18.04. More details below.
TL;DR:
You need to netboot with an initramfs that doesn't have 'scripts/
From whatever machine where netboot initramfs is created:
# Disable/block the problem script
mkdir -p /etc/initramfs-
touch /etc/initramfs-
# rebuild initramfs
update-initramfs -u
# Move/copy the new file to the netboot server
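This works because mkinitramfs prefers scripts under /etc/initramfs-tools/ over the packaged copies under /usr/share/initramfs-tools/, so an empty file with the same name effectively disables a packaged boot script. A toy sketch of that precedence, with illustrative directory and script names (the truncated paths above stay as they are in the original commands):

```shell
# Toy sketch of the initramfs-tools override mechanism (names illustrative).
share=$(mktemp -d)   # stands in for /usr/share/initramfs-tools/scripts/...
etc=$(mktemp -d)     # stands in for /etc/initramfs-tools/scripts/...

# The packaged boot script that masks the NFS cdrom mount unit:
printf '#!/bin/sh\necho "masking cdrom.mount"\n' > "$share/disable_cdrom_mount"

# An empty file with the same name in /etc shadows the packaged copy:
touch "$etc/disable_cdrom_mount"

# mkinitramfs-style selection: /etc wins over /usr/share.
for d in "$etc" "$share"; do
    if [ -e "$d/disable_cdrom_mount" ]; then
        script="$d/disable_cdrom_mount"
        break
    fi
done

if [ -s "$script" ]; then
    echo "would include and run the packaged script"
else
    echo "disabled by empty override"
fi
```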
The issue here is that systemd isn't able to update its mount status properly. In the case of 18.04, all of the 'failed' mounts are actually successfully mounted. This includes /tmp. BUT systemd doesn't recognize that fact and marks them all as red/failed.
In 16.04 this issue is a bit different. When booting, all of the same mounts are again mounted successfully AND systemd shows them all as green/active. BUT if you try to stop/unmount any of them you will see a similar situation. The unmount will actually succeed, but systemd will report an unmount failure and continue to show the unit as green/active.
Per the call trace thh noted in comment #21:
From what I can tell, mount_load_
The failure seems to be caused by the fact that the cdrom.mount unit (NFS mount) is masked. Once it's unmasked the failure doesn't occur and all mounts work as expected. You can actually observe this from within a 'broken' boot at the emergency prompt:
rm /lib/systemd/
systemctl daemon-reload
umount /tmp (ensure it's gone, there may be multiple mounts)
systemctl reset-failed tmp.mount
systemctl start tmp.mount
..and it will succeed
I did verify this issue by actually booting from a 'real' DVD, and the problem doesn't happen there. It's something specific to having the image mounted over NFS and masking its unit.
For reference, the disable_cdrom.mount script was the solution for this bug
https:/
Robert Giles (rgiles) wrote : | #37 |
Brian,
Thanks for sleuthing out a fix for this; I wanted to add that this also seems to work for netbooting 18.10.
This issue also affected Linux Mint 19.2, and Brian's workaround and fix in #36 work for that distribution as well.
I used the workaround steps to resume the broken boot, then followed the fix steps and rebuilt the initramfs using:
# rebuild initramfs
/usr/sbin/
then copied the resulting .img to my netboot server.
description: | updated |
no longer affects: | casper (Ubuntu) |
beta-tester (alpha-beta-release) wrote : | #39 |
Hello Victor (vtapia),
Does this mean the next upcoming Ubuntu live release, 18.04.2, should PXE boot without any tweak?
Is it already in the "Ubuntu 19.04 (Disco Dingo) Daily Build"?
Victor Tapia (vtapia) wrote : | #40 |
Hi,
I'm working on the patches for Disco, Cosmic, Bionic and Xenial. It's not in Disco yet, but the fix process will be tracked in this bug.
Victor Tapia (vtapia) wrote : | #41 |
The attachment "disco-
[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issue please contact him.]
tags: | added: patch |
Changed in systemd (Ubuntu Disco): | |
assignee: | nobody → Victor Tapia (vtapia) |
Changed in systemd (Ubuntu Cosmic): | |
assignee: | nobody → Victor Tapia (vtapia) |
Changed in systemd (Ubuntu Bionic): | |
assignee: | nobody → Victor Tapia (vtapia) |
Changed in systemd (Ubuntu Xenial): | |
assignee: | nobody → Victor Tapia (vtapia) |
Changed in systemd (Ubuntu Disco): | |
importance: | Undecided → Medium |
Changed in systemd (Ubuntu Cosmic): | |
importance: | Undecided → Medium |
Changed in systemd (Ubuntu Xenial): | |
importance: | Undecided → Medium |
Changed in systemd (Ubuntu Disco): | |
status: | Confirmed → In Progress |
Changed in systemd (Ubuntu Bionic): | |
importance: | Undecided → Medium |
status: | New → In Progress |
Changed in systemd (Ubuntu Cosmic): | |
status: | New → In Progress |
Changed in systemd (Ubuntu Xenial): | |
status: | New → In Progress |
Victor Tapia (vtapia) wrote : | #43 |
Victor Tapia (vtapia) wrote : | #44 |
Victor Tapia (vtapia) wrote : | #45 |
no longer affects: | systemd |
Changed in systemd (Ubuntu Disco): | |
status: | In Progress → Fix Committed |
assignee: | Victor Tapia (vtapia) → Dimitri John Ledkov (xnox) |
beta-tester (alpha-beta-release) wrote : | #46 |
I just tried out the daily build of
http://
from 2019-01-31 07:45 to PXE boot without using the "systemd.
but it still goes straight to maintenance mode.
Is the fix not included in the daily build yet?
Dimitri John Ledkov (xnox) wrote : | #47 |
No, it would have been impossible for any desktop image (even pending) to contain the new systemd on the 31st of January.
The first image that contains the new systemd is from 20190204 or later.
Changed in systemd (Ubuntu Disco): | |
status: | Fix Committed → Fix Released |
beta-tester (alpha-beta-release) wrote : | #48 |
just tried http://
thank you very much!!!
will the Ubuntu 18.04 LTS (Bionic Beaver) receive that fix in the next point release as well?
Dimitri John Ledkov (xnox) wrote : | #49 |
I'm afraid it just missed it, I think. But we can try.
information type: | Public → Public Security |
information type: | Public Security → Private Security |
information type: | Private Security → Public |
Changed in systemd (Ubuntu Cosmic): | |
status: | In Progress → Triaged |
Guillermo (guillermo-etsetb) wrote : | #50 |
Using Brian's workaround (#36) I'm able to netboot. However, raising the network interfaces fails. Is it related?
journalctl shows:
ubuntu ifup[3055]: Error: any valid prefix is expected rather than "dhcp/dhcp"
Networking does partially work: pinging IP addresses works, but DNS resolution doesn't.
beta-tester (alpha-beta-release) wrote : | #51 |
@Guillermo, I have the same issue with Ubuntu 19.04 daily (pending) from 2019-02-14
(without using the workaround) when PXE booting.
/var/log/syslog shows the following lines when I search for resolv:
```
Feb 14 17:53:27 ubuntu systemd[1]: Starting Restore /etc/resolv.conf if the system crashed before the ppp link was shut down...
Feb 14 17:53:27 ubuntu systemd[1]: Started Restore /etc/resolv.conf if the system crashed before the ppp link was shut down.
Feb 14 17:53:27 ubuntu avahi-daemon[1225]: Failed to open /etc/resolv.conf: Invalid argument
Feb 14 17:53:27 ubuntu kernel: [ 6.498832] Key type dns_resolver registered
Feb 14 17:53:27 ubuntu kernel: [ 6.985154] ACPI BIOS Error (bug): Could not resolve [\_SB.PCI0.
Feb 14 17:53:27 ubuntu kernel: [ 6.985719] ACPI BIOS Error (bug): Could not resolve [\_SB.PCI0.
Feb 14 17:53:27 ubuntu kernel: [ 6.986106] ACPI BIOS Error (bug): Could not resolve [\_SB.PCI0.
Feb 14 17:53:27 ubuntu kernel: [ 6.988112] ACPI BIOS Error (bug): Could not resolve [\_SB.PCI0.
Feb 14 17:53:27 ubuntu kernel: [ 7.347354] ACPI BIOS Error (bug): Could not resolve [\_SB.PCI0.
Feb 14 17:53:27 ubuntu kernel: [ 7.357711] ACPI BIOS Error (bug): Could not resolve [\_SB.PCI0.
Feb 14 17:53:28 ubuntu sh[1400]: grep: /etc/resolv.conf: No such file or directory
Feb 14 17:53:29 ubuntu systemd-
Feb 14 17:53:29 ubuntu systemd-
Feb 14 17:53:29 ubuntu systemd-
Feb 14 17:53:29 ubuntu systemd-
Feb 14 17:53:29 ubuntu systemd-
Feb 14 17:53:29 ubuntu systemd-
Feb 14 17:53:30 ubuntu NetworkManager[
Feb 14 17:53:30 ubuntu NetworkManager[
Feb 14 17:54:29 ubuntu systemd-
```
PS: long time ago ...
Guillermo (guillermo-etsetb) wrote : | #52 |
@beta-tester I can finally boot with networking using the workaround in 18.04.2.
The workaround is, however, a bit difficult to apply, unless there is another way to do it. unmkinitramfs will only extract one of the two embedded microcode archives, so I'm extracting the initrd manually.
Attaching a log from maintenance mode.