Composing a VM in MAAS with exactly 2048 MB RAM causes the VM to kernel panic

Bug #1797581 reported by Andres Rodriguez on 2018-10-12
This bug affects 8 people
Affects Status Importance Assigned to Milestone
linux (Ubuntu)
qemu (Ubuntu)

Bug Description

Using latest MAAS master, I'm unable to successfully compose a VM over the UI when it is composed with 2048 MB of RAM. By that I mean the VM is created, but it fails with a kernel panic.

So far this has only occurred in MAAS environments and often went away with e.g. the next daily images. Therefore, to reproduce and finally debug/fix this issue we'd need data from anyone affected. This list will grow as we identify more that is needed.

If you are affected, please attach to the bug
- the kernel used to boot the guest
- the initrd used to boot the guest
- libvirt guest XML used to define the guest
- PXE config provided to the guest
- the cloud image used to start the guest (That will likely not fit as bug
  attachment, consider storing it somewhere and contact us)

TODO: The Maas Team will outline how to get all these artifacts.
TODO: add reference to the comment outlining this (once added)
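Until that write-up exists, here is a rough sketch of collecting most of the requested artifacts from the KVM host. The guest name, image paths, and the MAAS resource layout are assumptions based on a default bionic install, not verified:

```shell
# Sketch: gather the requested artifacts into one directory for attaching.
collect_artifacts() {
  guest=$1; outdir=$2
  res=/var/lib/maas/boot-resources/current/ubuntu/amd64/ga-18.04/bionic/daily
  mkdir -p "$outdir"
  virsh dumpxml "$guest" > "$outdir/$guest.xml"       # libvirt guest XML
  cp "$res/boot-kernel" "$res/boot-initrd" "$outdir"  # kernel + initrd MAAS serves
  sha256sum "$res/squashfs" > "$outdir/squashfs.sha256"  # image too big to attach
  # (the PXE config MAAS rendered for the guest would need capturing separately)
}
# e.g.: collect_artifacts horsea-kvm-pod-2048 /tmp/bug-1797581
```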

summary: - [2.5, UI] Composing a VM over the UI is broken
+ [2.5, UI] Composing a VM over the UI is broken - VM has kernel panic
Changed in maas:
milestone: none → 2.5.0rc1
importance: Undecided → Critical
status: New → Triaged
tags: added: ui
Anthony Dillon (ya-bo-ng) wrote :

This doesn't appear to be a UI issue. Steve has taken a look at the templates, but it all seems to be displaying correctly.

Mike Pontillo (mpontillo) wrote :

I can't reproduce this crash. Does the kernel panic happen at commissioning time or later?

Can you attach the output of `virsh dumpxml <vm-name>` for both the working VM composed over the API and the non-working VM composed with the UI?

Changed in maas:
status: Triaged → Incomplete

Adding the linux package; we should narrow down the issue to see if it's in kernel space or user space.

summary: - [2.5, UI] Composing a VM over the UI is broken - VM has kernel panic
+ [2.5] Composing a VM with 2048 MB RAM causes kernel panic
Changed in maas:
status: Incomplete → Invalid
Changed in linux (Ubuntu):
status: New → Incomplete

To be clear, the kernel panic is seen when a VM is composed in MAAS with exactly 2048 MB of RAM. Composing with 2047 or 2049 MB RAM results in a working VM.
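For reference, these MiB sizes map to the KiB values seen later in the libvirt XML as follows; 2048 MiB is the only one of the three that lands exactly on a power-of-two (2 GiB) boundary. The arithmetic is straightforward; whether the boundary matters is only a suspicion:

```shell
# libvirt records <memory> in KiB; MAAS composes in MiB.
for mb in 2047 2048 2049; do
  echo "$mb MiB = $((mb * 1024)) KiB"
done
printf '2048 MiB = 0x%X bytes (exactly 2 GiB)\n' $((2048 * 1024 * 1024))
```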

Mike Pontillo (mpontillo) wrote :

We can't run apport-collect since the machine doesn't boot, but this was seen with a non-tainted kernel in my environment.

Changed in linux (Ubuntu):
status: Incomplete → New
summary: - [2.5] Composing a VM with 2048 MB RAM causes kernel panic
+ Composing a VM in MAAS with exactly 2048 MB RAM causes kernel panic
tags: removed: ui
summary: - Composing a VM in MAAS with exactly 2048 MB RAM causes kernel panic
+ Composing a VM in MAAS with exactly 2048 MB RAM causes the VM to kernel
+ panic
Ryan Harper (raharper) wrote :

Can you attach the guest xml and host kernel/qemu/libvirt packages?

Changed in linux (Ubuntu):
status: New → Incomplete
Mike Pontillo (mpontillo) wrote :

Here's an example of working XML (with 2047 MB RAM) that MAAS generated:

And here's an example of non-working XML (with 2048 MB RAM) that MAAS generated:

Changed in linux (Ubuntu):
status: Incomplete → Confirmed
Mike Pontillo (mpontillo) wrote :

Here's the version information.

  Installed: 4.0.0-1ubuntu8.5
  Candidate: 4.0.0-1ubuntu8.5
  Version table:
 *** 4.0.0-1ubuntu8.5 500
        500 bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     4.0.0-1ubuntu8.2 500
        500 bionic-security/main amd64 Packages
     4.0.0-1ubuntu8 500
        500 bionic/main amd64 Packages

  Installed: 1:2.11+dfsg-1ubuntu7.6
  Candidate: 1:2.11+dfsg-1ubuntu7.6
  Version table:
 *** 1:2.11+dfsg-1ubuntu7.6 500
        500 bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     1:2.11+dfsg-1ubuntu7.3 500
        500 bionic-security/main amd64 Packages
     1:2.11+dfsg-1ubuntu7 500
        500 bionic/main amd64 Packages

  Installed: 1:2.11+dfsg-1ubuntu7.6
  Candidate: 1:2.11+dfsg-1ubuntu7.6
  Version table:
 *** 1:2.11+dfsg-1ubuntu7.6 500
        500 bionic-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     1:2.11+dfsg-1ubuntu7.3 500
        500 bionic-security/main amd64 Packages
     1:2.11+dfsg-1ubuntu7 500
        500 bionic/main amd64 Packages

  Installed: 4.15.0-34.37
  Candidate: 4.15.0-34.37
  Version table:
 *** 4.15.0-34.37 500
        500 bionic-updates/main amd64 Packages
        500 bionic-security/main amd64 Packages
        100 /var/lib/dpkg/status
     4.15.0-34.37~16.04.1 500
        500 xenial-updates/main amd64 Packages

Ryan Harper (raharper) wrote :

And /var/log/libvirt/qemu/<guestname>.log ?

Mike Pontillo (mpontillo) wrote :

Here's the log from the failing VM. Doesn't look too unusual to me...

description: updated
Ryan Harper (raharper) wrote :

The backing image:


What boot image is that? Can I get a copy of that from maas-images? or how is it created?

On the node with the vm that fails, can you:

virsh start <vm-name> --console

Assuming it's a normal ubuntu image which has normal console= settings, it should dump the boot console to the terminal so we can capture the full boot to panic.

Ryan Harper (raharper) wrote :

I'm unable to recreate with a daily bionic cloud-image on a bionic host with the same versions.

% sudo apt install uvtool libvirt
% uvt-simplestreams-libvirt -vv sync --source 'supported=True' arch=amd64 release=bionic
% uvt-kvm create --memory 2048 --cpu 1 --disk 10 rharper-b1 label=daily release=bionic
% virsh dumpxml rharper-b1 | grep Mem
  <currentMemory unit='KiB'>2097152</currentMemory>

Mike Pontillo (mpontillo) wrote :

It's an empty image - MAAS PXE boots the VM.

Could you give it a try with MAAS? I can help you with the setup if needed - just ping me on IRC.

Mike Pontillo (mpontillo) wrote :

Here's a full console log from the failure.

Like Ryan, I cannot reproduce this locally - hrm.

The crash in your log is at the root-fs mount:

[ 22.524541] VFS: Cannot open root device "squash:" or unknown-block(0,0): error -6
[ 22.575588] Please append a correct "root=" boot option; here are the available partitions:
[ 22.583909] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

Also, we have to stick to exactly your values (one of the repro attempts used a slightly different value):
  <memory unit='KiB'>2096128</memory>
  <currentMemory unit='KiB'>2096128</currentMemory>

I tried with the exact numbers above but "normal" cloud image boot is still ok.

I wonder if the kernel has an off-by-one error, e.g. aligning the squashfs at the lowest 2 GiB boundary but, with exactly this amount of memory, choosing a place where it would not fit.
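The arithmetic behind that suspicion can be spelled out (pure arithmetic; the 2 GiB alignment itself is unconfirmed speculation):

```shell
# With exactly 2048 MiB the guest RAM fills [0, 2 GiB) completely, leaving
# zero headroom for anything a loader might place just below that boundary.
for mb in 2047 2048 2049; do
  top=$((mb * 1024 * 1024))
  printf '%d MiB: RAM ends at 0x%08X, headroom below 2 GiB: %d KiB\n' \
    "$mb" "$top" $(( (2147483648 - top) / 1024 ))
done
```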

We'd need to set up a local HTTP server and serve the squashfs, to boot into that.
With some luck we can reproduce there and then take libvirt and MAAS out of the equation.

- I started off as Ryan did with a Cloud Image test via UVTool.
- Next I extracted the kernel+initrd from the guest to provide those from the host (as you do via PXE)
- installed nginx
- made initrd available on /var/www/html/boot-initrd (initrd.img-4.15.0-36-generic)
- made kernel available on /var/www/html/boot-kernel (vmlinuz-4.15.0-36-generic)
- The address of the Host on the libvirt net is, verify the guest can http from there
- get matching squash (see below for details)
- get an empty qemu disk via qemu-img like the type raw that maas uses
  sudo qemu-img create -f raw /var/lib/libvirt/images/empty-root.img 10G
- With that, modify the guest to use these kernel/initrd/squashfs/empty-root

XML of the guest:

I have an ordering issue in my repro: IP is configured in /scripts/init-bottom, that is after the attempt to mount the squashfs in what seems to be /scripts/local-premount.
I can fetch the squashfs from the initramfs by hand - but wouldn't you be affected by the same ordering issue? I need to find out how you usually get around that to continue the repro that hopefully eventually helps to focus on the root cause.

I experimented a bit more and asked around on IRC.
But so far I can't get past the ordering issue: IP is initialized too late and because of that the squashfs mount fails.

-- Appendix --

Get Squash:
To continue I'd need the current squashfs instead of the disk image.
My uvtool spawned this for me:
$ uvt-simplestreams-libvirt --verbose query
release=bionic arch=amd64 label=daily (20181012)
So let's get the matching squashfs URL and fetch that.
$ sstream-query --output-format="%(item_url)s" --no-verify arch=amd64 release=bionic label=daily ftype=squashfs version_name=20181012
$ sudo wget -O /var/www/html/squashfs

Note: That setup is available on server horsea

After DHCP is up it works just fine.

(initramfs) wget
Connecting to (
squashfs 100% |*******************************| 174M 0:00:00 ETA
(initramfs) mount -t squashfs squashfs /root
(initramfs) mount
rootfs on / type rootfs (rw)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=1007984k,nr_inodes=251996,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=204128k,mode=755)
tmpfs-root on /media/root-rw type tmpfs (rw,relatime)
copymods on /root/lib/modules type tmpfs (rw,relatime)
/dev/loop0 on /root type squashfs (ro,relatime)

So if anyone has a good hint how to get out of the ip/squash-root ordering issue let me know.
That would - as mentioned - most likely help to exclude MAAS and libvirt from the suspects, to be able to debug further.

Working on this I found by accident that I actually can reproduce:
  VFS: Cannot open root device "squash:" or unknown-block(0,0): error -6

But the way I got there suggests some more potential causes.
I got there by breaking my initramfs :-)

After realizing this I removed the initramfs from the guest definition and got to just the same error.

The reason is that without an initramfs the kernel itself is responsible for handling root=, and it has no idea about squashfs.

With that knowledge I re-checked your log at:
It also has no entry like:
  Loading, please wait...
  starting version 237
Which you'd see if systemd in the initrd took over.
So your bad case also fails to load the initramfs!

That said, why could that be special for just this memory size?

Theories related to the guest size impacting this:
- the initrd is placed at an explicit address in the PXE config, now conflicting with kernel allocations
- the initrd is misplaced/misread by the PXE code in qemu

@Mike - I'd want to know your exact PXE config
@Mike - It would be great to attach your kernel+initrd+squashfs+rootdisk files to the bug.

Hopefully we can reproduce by providing kernel+initrd via PXE and varying the guest size.

Changed in qemu (Ubuntu):
status: New → Incomplete
Ryan Harper (raharper) wrote :

[ 0.943808] Unpacking initramfs...
[ 20.690329] Initramfs unpacking failed: junk in compressed archive
[ 20.703673] Freeing initrd memory: 56612K

Looks like the initrd was compromised, possibly a networking hiccup? Can you confirm the checksums on the source and attempt to download the URL?
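One way to do that confirmation (a sketch; the helper name and the idea of comparing the stored copy against what the guest fetched are mine, not a MAAS tool):

```shell
# Compare the initrd as stored on the MAAS rack controller with the copy a
# guest actually received; a mismatch would confirm corruption in transit.
check_initrd() {
  if cmp -s "$1" "$2"; then
    echo "initrd intact"
  else
    echo "initrd differs - corrupted in transit?"
  fi
}
# e.g.: check_initrd /var/lib/maas/boot-resources/current/.../boot-initrd /tmp/guest-initrd
```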

I can't see why the size of the VM's RAM makes a difference, though. But I don't think this is a qemu issue any more.

Since it is reproducible, it is probably not a networking hiccup.
But maybe a hiccup of the PXE setup (that's why I asked for it).
Or a hiccup of the quite complex shim loading if signed kernels are used.

@Mike - in addition to my questions above, could you also try a non-signed / non-shim load path to check whether that might be part of the reason?

Changed in maas:
milestone: 2.5.0rc1 → none
Mike Pontillo (mpontillo) wrote :

I agree that this doesn't look like a QEMU issue, and I agree that it doesn't look like a [general] networking hiccup.

@paelzer, the easiest way to get the exact configs you need would be to install MAAS similar to how I've described on our discourse forum[1]. The PXE config will be /similar/ to this[2] (copied/pasted from /usr/lib/maas/maas-test-enlistment on my MAAS server).

[1]: MAAS setup instructions

[2]: PXE config

Changed in qemu (Ubuntu):
status: Incomplete → Invalid
Ryan Harper (raharper) wrote :

Does this fail with other releases, like trusty? I was wondering if initrd size plays a factor here:

precise/hwe-t: 25M
trusty/hwe-x: 35M
xenial/ga: 39M
xenial/hwe: 53M
xenial/edge: 53M
bionic/ga: 55M
cosmic/ga: 57M

That might be faster for you to test than for us to replicate the setup.

Mike Pontillo (mpontillo) wrote :

Good idea @rharper. It's easy for MAAS to attempt commissioning on Xenial or Bionic, so I gave Xenial a try. It works fine![1]

[1]: Console log -

Mike Pontillo (mpontillo) wrote :

I just tried Xenial with the HWE kernel - same result (success). FYI.

Mike Pontillo (mpontillo) wrote :

... it's interesting that [practically?] the same kernel version that fails consistently with Bionic works just fine with Xenial.

I must admit that, compared to you just sharing your kernel/initrd/squashfs, being asked in comment #23 to install my own dev MAAS consumes quite some time :-/
Waiting for a sync here, setting up users there, sorting out how to make MAAS provide PXE and such on the "maas" virtual network, ... - it just isn't one click and ready.

Initially things worked other than the unexpected "wait for image sync".
I've got a Pod registered and working to get to libvirt data.

But compose blocks at:
"Pod unable to compose machine: Please add a 'default' or 'maas' network whose bridge is on a MAAS DHCP enabled VLAN. Ensure that libvirt DHCP is not enabled."

Well that is clear, but a link where/how to get MAAS dhcp/pxe to own that vlan would be nice.
Maybe something small and easy for you that would help to be added.

When I go to Subnets to create one (so that MAAS owns it), following your link, it tells me the subnet already exists ("Error: Subnet with this Cidr already exists.") - but it isn't in the subnet overview, so I can't enable DHCP/PXE on it :-/

My overview suffered a bit from the sample data added via the links you shared; I cleaned up a lot of demo content and can now see more clearly. I want to add a subnet to my fabric, which I called "libvirt-maas", it being the virbr1 "maas" network defined by libvirt.
Afterwards I still get "Error: Subnet with this Cidr already exists."

Well it might conflict with the "" set up by.
Let's try to set up another subnet - that worked.
I also added a range of IPs in the hope that this would switch DHCP on, but it didn't.

So on the subnet's own page "managed allocation" is enabled, but when clicking Subnets in the top menu, the DHCP column in the table says "disabled".

Anyway, let's try to compose something...
No, still blocked at "Pod unable to compose machine: Please add a 'default' or 'maas' network whose bridge is on a MAAS DHCP enabled VLAN. Ensure that libvirt DHCP is not enabled."

That exceeds what I can find in [2] about enabling DHCP, but it still refuses me.
Since reusing the MAAS DHCP/PXE setup is exactly what I wanted to debug for/with you, I'm stuck, have lost time, and we are no further along on this bug :-/

Now trying to squeeze your PXE config into tftp/libvirt manually without MAAS - let's see if I can reproduce via that.


It didn't let me calm down - I found in the doc [1] that the switch might be on the VLAN and not on the subnet (hrm, why?)

It was on the vlan
Not on the subnet

There I found the "provide dhcp" entry \o/
That unlocked some understanding of what the range allocations look like on VLANs/subnets.
I reconfigured the range allocation and hit the "provide DHCP" switch.

Looking back all is reasonable, just some stumbling on my first maas setup from scratch :-/
Must be funny for you seeing me struggle doing so :-)

With that in place I created three pods with 2047/2048/2049 MB memory.
ubuntu@node-horsea:~/maas$ virsh dumpxml horsea-kvm-pod-2047 | grep emor
  <memory unit='KiB'>2096128</memory>
  <currentMemory unit='KiB'>2096128</currentMemory>
ubuntu@node-horsea:~/maas$ virsh dumpxml horsea-kvm-pod-2048 | grep emor
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
ubuntu@node-horsea:~/maas$ virsh dumpxml horsea-kvm-pod-2049 | grep emor
  <memory unit='KiB'>2098176</memory>
  <currentMemory unit='KiB'>2098176</currentMemory>

They all went into the commissioning phase but - who would be surprised - got stuck somewhere in there :-/
OTOH the bug says "composing crashes", so we might be in the right place; let's take a look at these three guests.


I didn't see any guests crash; instead they all just hang, finding nothing to boot.

I reduced this to just qemu (for easier debugging later on), but it reliably fails to get anything from PXE.

Guest Console:
  iPXE (PCI 00:03.0) starting execution...ok
  iPXE initialising devices...ok
  iPXE 1.0.0+git-20180124.fbe8c52d-0ubuntu2.1 -- Open Source Network Boot Firmware

  net0: 52:54:00:3b:72:0a using virtio-net on 0000:00:03.0 (open)
    [Link:up, TX:0 TXE:0 RX:0 RXE:0]
  Configuring (net0 52:54:00:3b:72:0a).................. No configuration methods
  succeeded (
  No more network devices

Corresponding command to start qemu:
/usr/bin/qemu-system-x86_64 -name guest=horsea-kvm-pod-2047,debug-threads=on -machine pc-i440fx-bionic,accel=kvm,usb=off,dump-guest-core=off -m 2047 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -cpu kvm64 -rtc base=utc -no-shutdown -curses \
-chardev socket,id=charmonitor,path=/tmp/horsea-kvm-pod-2047.monitor.sock,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-chardev stdio,mux=on,id=charserial0,logfile=/tmp/horsea-kvm-pod-2047.log \
-device isa-serial,chardev=charserial0,id=serial0 \
-drive file=/var/lib/uvtool/libvirt/images/d9658e2f-d4c7-4a55-8ecc-b216612fe410,format=raw,if=none,id=drive-virtio-disk0 \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,serial=d9658e2f-d4c7-4a55-8ecc-b216612fe410 \
-boot order=n,strict=on \
-net bridge,br=virbr1,helper=/usr/lib/qemu/qemu-bridge-helper -net nic,model=virtio,macaddr=52:54:00:3b:72:0a

This is the MAC that is also in the libvirt XML.
The bridge is the one served by MAAS, and it seems there is just no PXE server responding on it.

Since the two other guests, as started by MAAS+libvirt, hang in just the same way, this is most likely a general MAAS setup issue on my side.
Please help me get MAAS to reply the way you would usually expect, so I can then go on with debugging.


Taking the x86 pxelinux.0 from /usr/lib/PXELINUX/pxelinux.0, otherwise doing the same tftp setup as in [1], and switching from the "maas" to the "default" network (where I have set up DHCP-to-netboot via libvirt as in [1]), I can see netboot happening.

With that, instead of pushing things into the guest from libvirt via kernel/initrd tags, I now provide them via PXE, similar to your config.

- tftp serves: lpxelinux.0 + pxe modules + PXEconfig
- nginx serves: kernel/initrd/squashfs
- qemu started directly without libvirt

This still gets the initial kernel booting fine and then fails to mount the squashfs:
mount: mounting squash: on /root failed: No such device

From the initramfs I can mount it:
(initramfs) wget
Connecting to (
squashfs 100% |*******************************| 174M 0:00:00 ETA
(initramfs) mount -t squashfs squashfs /root/
It is mounted just fine:
/dev/loop0 on /root type squashfs (ro,relatime)

It is possible that I'm back at the same ordering issue I had before - the IP coming up too late to mount the squashfs - while mounting it later works fine.

I mounted it with 2047/2048/2049 MB guests without a problem.

Despite all the work on this I still need a better reproducer :-/
I might take a look at rebuilding a more verbose initramfs for that after lunch.

--- config details ---

$ find /srv/tftp/

Kernel/initrd/squash as described in comment #18

# modified to get scrollable direct console (but no iPXE VGA output)
/usr/bin/qemu-system-x86_64 -name guest=horsea-kvm-pod-2047,debug-threads=on -machine pc-i440fx-bionic,accel=kvm,usb=off,dump-guest-core=off -m 2047 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -cpu kvm64 -rtc base=utc -no-shutdown \
-nographic -serial mon:stdio \
-drive file=/var/lib/uvtool/libvirt/images/d9658e2f-d4c7-4a55-8ecc-b216612fe410,format=raw,if=none,id=drive-virtio-disk0 \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,serial=d9658e2f-d4c7-4a55-8ecc-b216612fe410 \
-boot order=n,strict=on \
-net bridge,br=virbr0,helper=/usr/lib/qemu/qemu-bridge-helper -net nic,model=virtio,macaddr=52:54:00:3b:72:0a
# For access to the iPXE / lpxelinux graphical output
#-curses \
#-chardev stdio,mux=on,id=charserial0,logfile=/tmp/test.log \
#-serial chardev:charserial0 \
#-mon chardev=charserial0,mode=readline \
# For access to a scrollable direct console and monitor
# -nographic -serial mon:stdio

$ cat /srv/tftp/pxelinux.cfg/default
DEFAULT execute

LABEL execute
  SAY Booting under MY direction...
  APPEND console=ttyS0 nomodeset ro root=squash: ip=::::horsea-kvm-pod-2047:BOOTIF ip6=off overlayroot=tmpfs...
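(For orientation only: a complete lpxelinux stanza of that shape would also carry KERNEL/INITRD lines, roughly like the following. The server address and file names here are a hypothetical sketch, not the real MAAS-rendered config:)

```
DEFAULT execute

LABEL execute
  SAY Booting under MY direction...
  KERNEL http://10.0.0.1/boot-kernel
  INITRD http://10.0.0.1/boot-initrd
  APPEND console=ttyS0 nomodeset ro root=squash:http://10.0.0.1/squashfs ip=::::horsea-kvm-pod-2047:BOOTIF ip6=off overlayroot=tmpfs
```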


There actually is no need to debug further on my non-maas PXE setup.

As identified before - your setup breaks due to
[ 20.690329] Initramfs unpacking failed: junk in compressed archive
Everything else is a follow on issue, with eventually:
[ 22.524541] VFS: Cannot open root device "squash:" or unknown-block(0,0): error -6

But since my setup as outlined in comment #31 PXE-boots the kernel+initramfs just fine I don't have to debug the rest.
I already passed the point where it breaks for you.

Can you use the above hints on your breaking setup to likewise remove components one by one?
I'd assume that removing libvirt makes no difference to your case.
But you could maybe end up with:
- PXE-served by MAAS = bad
- PXE served by tftpd = good
Or anything like it.

So please follow the hints above (or get in touch if you need me to do so) and convert your setup step by step until you can identify which component makes the difference between the good and the bad case.

I'd also still be happy to get your kernel/initramfs/squashfs/PXE/... files to implant into my setup, to test whether any of them is the trigger.

Changed in linux (Ubuntu):
status: Confirmed → Incomplete
Changed in maas:
status: Invalid → Incomplete
Changed in qemu (Ubuntu):
status: Invalid → Incomplete
Mike Pontillo (mpontillo) wrote :

After triaging this again on a call with Andres (who originally reported this) this morning, we determined that this issue is no longer reproducible with MAAS. The only explanation I have at this point is that there was something wrong with the daily image MAAS was using last week, and that a subsequent update fixed it. (It looks like the images were updated yesterday.)

Sorting my /var/lib/maas/boot-resources/cache directory by last-modified, I see the following:

@paelzer, if you'd like to test with any of those images, I've made copies. (Not sure where to put them, though - I wonder if your MAAS synced them as well?)

Still very weird that it worked in all cases we tested except when we allocated 2048 MB RAM.

I'd like to thank Ryan and Christian for their efforts on this "heisenbug". We should think about how to better handle issues like this, so that it's easier to peel back the layers and get to a point where everyone's environment can be consistent without this much effort. And we'll reopen this if it returns.

Mike Pontillo (mpontillo) wrote :

Looking again at the date stamps, I don't see any squashfs filesystems older than October 15th. The kernels are all from ~September 25th. So I feel like this must have been an interaction between the September 25th kernel and whatever the previous squashfs was.

Thanks for your feedback.
We are at least much closer to what happened and thereby should be faster if it reoccurs.
I don't think it is an interaction of the kernel and squashfs, as we found that the initramfs unpack was already broken - that is before the squashfs comes into play.

I'll keep my test setup until I need to re-deploy the host, which usually is about every 1-2 weeks.
If you ever find old kernel/initrd combinations or any new one to trigger it again - please share them via e.g. internal private fileshare - I sent you some details on IRC.

Let's see if it comes up again.

Vladimir Grevtsev (vlgrevtsev) wrote :
Changed in maas:
status: Incomplete → Confirmed
Changed in linux (Ubuntu):
status: Incomplete → New
Changed in qemu (Ubuntu):
status: Incomplete → New
tags: added: cpe-onsite
tags: added: field-medium
Changed in maas:
status: Confirmed → Incomplete
Vladimir Grevtsev (vlgrevtsev) wrote :

Subscribing field-medium: a workaround exists (i.e. don't use 2048 MB RAM VMs), but a solution still needs to be found.

Also, until yesterday (i.e. on libvirt 1:2.11+dfsg-1ubuntu7.8 + maas 2.5rc2) this worked fine.

Andres Rodriguez (andreserl) wrote :

Setting this back to Incomplete for MAAS because there is not enough information to determine that this is a MAAS bug. That said, given that this works with any VM other than one with 2048 MB of RAM, the issue appears to lie in the kernel or libvirt.

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in linux (Ubuntu):
status: New → Confirmed
Changed in qemu (Ubuntu):
status: New → Confirmed

Again it seems to break due to the initramfs being bad:
[ 0.840153] Initramfs unpacking failed: no cpio magic
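As a side note, the "no cpio magic" complaint can be checked by hand: the kernel expects an uncompressed initramfs segment (e.g. a prepended microcode archive) to begin with one of the ASCII cpio "newc"/"crc" magics. A small sketch (the function name is mine):

```shell
# Report whether a file starts with one of the ASCII cpio magics the kernel
# accepts for an uncompressed initramfs segment (070701 = newc, 070702 = crc).
check_cpio_magic() {
  case "$(head -c 6 "$1")" in
    070701|070702) echo "looks like a cpio archive" ;;
    *) echo "no cpio magic (compressed, truncated, or corrupt?)" ;;
  esac
}
```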

@Mike/Andres - can you reproduce it on your own test env this time?

@Vladimir - could you internally share exactly the kernel and initrd that are booted in this case? Last time we wondered if the issue could be "in there", so it would be great to have exactly the ones that fail for you, to try reproducing with them.

One other oddity in the xml is the cgroup construction in the "bad" case.



@Ryan - the resource and other bits being different is because in the bad case the domain was up and in the good case down. That in itself is not a problem.

Reproduced today on MAAS 2.5, latest bionic.

It is not reproducible if you switch from the ga-18.04 kernel to the ga-18.04-lowlatency kernel.

Thiago Martins (martinx) wrote :

I'm also facing this problem.

My workaround is to compose VMs using virt-manager with Firmware = UEFI for the VM, and then refresh the MAAS pod.

You need to install ovmf on the MAAS pod host:

sudo apt install ovmf

Then, no more kernel panic!


Ryan Harper (raharper) wrote :

Can you attach your guest XML that's working successfully with 2048MB?

Thiago Martins (martinx) wrote :


 The working XML file is attached here, with 2048 MB of RAM.

 NOTE: This XML was created using virt-manager; MAAS then took it over after being "refreshed".


Ryan Harper (raharper) wrote :

On Mon, Jan 14, 2019 at 1:55 PM Thiago Martins <email address hidden>

> @Ryan,
> The working XML file is attached here, with 2048 MB of RAM.
> NOTE: This XML was created using Virt-Manager, then, MaaS took it over
> after being "refreshed".


If you drop the <os> section (which is what tells libvirt to boot via UEFI)
does your VM still work?

<type arch="x86_64" machine="pc-i440fx-bionic">hvm</type>
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>

Note that MAAS boots UEFI images with grub2[1], not pxeboot/ipxe/seabios;
so I think we can narrow the error down to the non-UEFI case, which may help
find the issue.
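(For comparison, the BIOS/SeaBIOS variant of that <os> section is just the type line without the pflash loader - a sketch; the boot element is my assumption based on MAAS network-booting the guest:)

```
<os>
  <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
  <boot dev='network'/>
</os>
```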


> Cheers!
> Thiago
> ** Attachment added: "vunft-1.xml"

Thiago Martins (martinx) wrote :


 If I remove the UEFI line (back to BIOS), the machine enters a kernel panic during boot/commissioning.

 That was actually the very first test I executed (adding UEFI to see what happens)! That's how I came up with this UEFI workaround, which is even better for me.


Changed in maas:
importance: Critical → Undecided
Jason Hobbs (jason-hobbs) wrote :

Bumped to field-high as we ran into this again in testing.

We have a workaround, but it's not to use 2G VMs, which is really silly and hard to remember when we add new deployments, especially because the failure mode is not obvious at all.

Thanks for chiming in - so far this has been non-reproducible every time we looked at it. We made plenty of attempts at this with the MAAS team and later on with Thiago, but still lack the right data to continue.

Last time I asked - I think it was still Mike - I said I'd really like to get the full set of binaries involved, but IIRC he then couldn't reproduce it with the newer images/kernel of that day. Maybe we can use this issue hitting you now as a new chance to get what we need to continue debugging towards a resolution.

@Jason - can I get from you the full set of elements used when you hit it?
That would be:
- which release are you on?
- which qemu/libvirt/seabios/ovmf do you have installed exactly
- can you share the XML that the guest is defined with

We have the above from different cases, but it might be important to know which ones exactly are active in your case, so please check them. Finally - and this is what was missing so far:
- can you attach the set of binaries that are used which should be
  - cloud-img
  - kernel (as maas provides that directly to the guest in guest XML)
  - initrd (as maas provides that directly to the guest in guest XML)
- describe your KVM Host server HW (so that we can test on something similar)

As an alternative - can you provide a login to such a host with a breaking guest defined? There we could work together on fetching the required data.

Jason Hobbs (jason-hobbs) wrote :


- release: bionic
- seabios: 1.10.2-1ubuntu1
- qemu: 1:2.11+dfsg-1ubuntu7.10
- libvirt: 4.0.0-1ubuntu8.8
- ovmf - this is a uefi thing right? we're not using it.

- kernel 2019-03-18T12:17:11+00:00 elastic-2 kernel: [ 0.000000] Linux version 4.15.0-46-generic (buildd@lgw01-amd64-038) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 (Ubuntu 4.15.0-46.49-generic 4.15.18)

I don't have copies of the binaries from this run - it was from daily maas images:

2019-03-18T12:03:37.296671+00:00 leafeon maas.import-images: [info] Region downloading image descriptions from ''.

I don't see anything in the logs to indicate an ID number for the kernel, initrd, or image coming there.

Ok, thanks Jason.
That is great info, but about as much as we had so far :-/

Can you set your deployment up in a way that if that happens again you can collect these binaries?

If it still triggers right now when you (re)deploy a 2G guest, please use the chance to finally give us the bits that we need to reach the next stage of debugging.

Lesson learned from earlier on this bug: it might already be gone by tomorrow, so I hope you have the time and chance to get those XML+image+kernel+initrd files.

Dmitrii Shcherbakov (dmitriis) wrote :

Just reproduced it on my env (where things used to work) after updating from MAAS 2.5.0~rc2 to 2.5.2.
[ 15.458594] VFS: Cannot open root device "squash:" or unknown-block(0,0): error -6

tree /var/lib/maas/boot-resources/
# ...

├── current -> snapshot-20190419-115735
└── snapshot-20190419-115735

Which binaries do I need to have uploaded?

sha256sum /var/lib/maas/boot-resources/snapshot-20190419-115735/ubuntu/amd64/ga-18.04/bionic/daily/*
69ca457a119fe309d315972ca2756a17bd9bc55bc98f2bea5542566a7f41b08f /var/lib/maas/boot-resources/snapshot-20190419-115735/ubuntu/amd64/ga-18.04/bionic/daily/boot-initrd
166853ad9342fdf5be17988e5e18cbf0458ab0da94f18f5e331b3581e3610b97 /var/lib/maas/boot-resources/snapshot-20190419-115735/ubuntu/amd64/ga-18.04/bionic/daily/boot-kernel
1e7841d7fca13ef27c2742cfc2c3a2d59491a74c268b705a02eb4ee8f673d150 /var/lib/maas/boot-resources/snapshot-20190419-115735/ubuntu/amd64/ga-18.04/bionic/daily/squashfs

ii seabios 1.10.2-1ubuntu1 all Legacy BIOS implementation
ii ipxe-qemu 1.0.0+git-20180124.fbe8c52d-0ubuntu2.2 all PXE boot firmware - ROM images for qemu
ii qemu-kvm 1:2.11+dfsg-1ubuntu7.12 amd64 QEMU Full virtualization on x86 hardware
ii libvirt0:amd64 4.0.0-1ubuntu8.8 amd64 library for interfacing with different virtualization systems

Dmitrii Shcherbakov (dmitriis) wrote :

echo -n "BOOT_IMAGE= nomodeset ro root=squash: ip=::::maas-vhost6:BOOTIF ip6=off overlayroot=tmpfs overlayroot_cfgdisk=disabled cc:{'datasource_list': ['MAAS']}end_cc cloud-config-url= apparmor=0 log_host= log_port=5247 --- console=ttyS0,115200 initrd= BOOTIF=01-52-54-00-3f-ae-46" | wc -c


#define COMMAND_LINE_SIZE 2048
#define PARAM_SIZE 4096 /* sizeof(struct boot_params) */

Doesn't look like we are anywhere close to the kernel limits on command-line parameters.

However, the root argument as printed in the panic message looks like it was cut to fit a 64-byte string (last byte reserved for null termination):

echo -n 'squash:' | wc -c

It looks like this is coming from the following code (strlcpy into a 64-byte array):

static char __initdata saved_root_name[64];

static int __init root_dev_setup(char *line)
{
	strlcpy(saved_root_name, line, sizeof(saved_root_name));
	return 1;
}

__setup("root=", root_dev_setup);

And the overall code-path (judging by the md auto-detection log messages):
prepare_namespace ->

 if (saved_root_name[0]) {
  root_device_name = saved_root_name; // <---- (!) usage of a cut-down root param
  if (!strncmp(root_device_name, "mtd", 3) ||
      !strncmp(root_device_name, "ubi", 3)) {
   mount_block_root(root_device_name, root_mountflags);

It does not look like we are hitting either -EACCES or -EINVAL, so we fall through to panic():
void __init mount_block_root(char *name, int flags)
// ...
	for (p = fs_names; *p; p += strlen(p)+1) {
		int err = do_mount_root(name, p, flags, root_mount_data);
		switch (err) {
			case 0:
				goto out;
			case -EACCES:
			case -EINVAL:
				continue;
		}
		/*
		 * Allow the user to distinguish between failed sys_open
		 * and bad superblock on root device.
		 * and give them a list of the available devices
		 */
		__bdevname(ROOT_DEV, b);
		printk("VFS: Cannot open root device \"%s\" or %s: error %d\n",
				root_device_name, b, err);
		printk("Please append a correct \"root=\" boot option; here are the available partitions:\n");
// ...
		printk("DEBUG_BLOCK_EXT_DEVT is enabled, you need to specify "
		       "explicit textual name for \"root=\" boot option.\n");
		panic("VFS: Unable to mount root fs on %s", b);

Dmitrii Shcherbakov (dmitriis) wrote :

The kernel code path mentioned in #55 is only executed if there is no "early userspace init" - in other words, if there is no /init on initrd:

	/*
	 * check if there is an early userspace init. If yes, let it do all
	 * the work
	 */

	if (!ramdisk_execute_command)
		ramdisk_execute_command = "/init"; /* <--- if there's no command specified, default to /init */

	if (sys_access((const char __user *) ramdisk_execute_command, 0) != 0) { /* <-- check that /init is present via the access syscall */
		ramdisk_execute_command = NULL;
		prepare_namespace(); /* <-- call prepare_namespace as mentioned in #55, which results in the error */

However, I can see that the initrd used in my case contains the init script (so sys_access should be successful):


 lsinitramfs /var/lib/maas/boot-resources/current/ubuntu/amd64/generic/bionic/daily/boot-initrd | grep -P ^init$

If I increase the memory allocation from 2048 to 2049 MiB the machine starts to boot just fine.

Unsuccessful boot log (2048 MiB):
Successful boot log (2049 MiB):

Attached boot-initrd.

Dmitrii Shcherbakov (dmitriis) wrote :

I cannot reproduce the same with a xenial (GA kernel) image with 2048 MiB of RAM allocated to a VM.

So it seems to me that this is a kernel issue.

Dmitrii Shcherbakov (dmitriis) wrote :

Found something interesting.

Bionic + 2048 MiB of RAM (bad):

[ 1.520243] Unpacking initramfs...
[ 14.712821] Initramfs unpacking failed: broken padding
[ 14.723088] Freeing initrd memory: 56636K

Bionic + 2049 MiB of RAM (good):

[ 0.752624] Unpacking initramfs...
[ 5.572407] Freeing initrd memory: 56636K

Xenial HWE + 2048 MiB of RAM (bad):

[ 5.598647] Unpacking initramfs...
[ 84.494431] Initramfs unpacking failed: junk in compressed archive
[ 84.503565] Freeing initrd memory: 54564K

Dmitrii Shcherbakov (dmitriis) wrote :

Tested bionic-hwe - the issue does not occur with 2048 MiB.

The closest issue filed upstream I found is this:

Changed in qemu (Ubuntu):
status: Confirmed → Incomplete

@Dmitrii - we already saw the "junk in ..." message in comment #21.
The "broken padding" is interesting, but it might be the same issue with a different message, as the newer kernel understands slightly more about it.

That it only occurs with Xenial-HWE but not with the base Xenial kernel on the same initrd is interesting as well. I agree that this is worth the kernel task on this bug (it was the same initrd in this test, right?).

The question is whether the initrd really is broken (unlikely, since it works with other memory sizes) or whether it is broken by qemu/seabios/... when passed to the guest (likely).
Having such a *full set* of affected files would be great.

What we'd need is for you to copy the kernel+initrd+image+xml used somewhere we can reach and/or attach it here if it fits - this was asked of all reporters in comment #51 already.

I have seen that you attached the initrd of your case at least in comment #56 - thanks!
I'll take another look at the issue with that initrd; I will let you know if I need more.

Ran with:
- the cloud-image of Xenial of 20190419 [1]
- XML with a memory definition of:
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
- kernel&initrd are from the host
- The attached broken initrd of comment #56
- Otherwise it is a uvtool kvm guest as it is created on Bionic
- Host is Bionic
- different kernels:
  - xenial
  - xenial hwe
  - bionic
  - bionic-hwe

After the above yielded no useful results, I decided to exclude even libvirt from the equation for now (we don't need a disk at all - we just need to know whether the kernel can extract the initrd):

$ sudo qemu-system-x86_64 -enable-kvm -m 2048M -nographic -serial mon:stdio -append 'console=ttyS0' -kernel bug-1797581-bionic-hwe -initrd bug-1797581-bad-initrd

All kernels loaded the attached broken initrd just fine.
:-/ no repro yet

- in the past we were unsure if this padding would be mangled by bootloaders, tftp or anything like it. Do you still reproduce it locally, or only through MAAS?
- The ramdisk that you attached seems to be bionic - is that correct?
- Please test the same without a disk; it should not matter and saves a lot of space when attaching files here.
- Would you mind attaching the full set of a broken case (kernel+initrd+xml)?
- Could you report the full command line that the broken boot called qemu with?
- If you still run this in MAAS or libvirt, would you mind checking step by step whether the same occurs
  a) without maas (only in libvirt)
  b) without libvirt (probably check which args, e.g. -m, it exactly passes to qemu)


For experimenting with sizes, things can easily be padded with zeros:
$ truncate -s 80000000 bug-1797581-bad-initrd-extended
$ truncate -s 9000000 bug-1797581-xenial-base-extended

Boots just as well afterwards.

Dmitrii Shcherbakov (dmitriis) wrote :

tar -czvf boot-resources-20190419-115735-amd64-generic-bionic.tar.gz boot-resources/snapshot-20190419-115735/ubuntu/amd64/generic/bionic/

Dmitrii Shcherbakov (dmitriis) wrote :

The bionic/ga files from #63 need to be placed into both dirs:

1) /var/lib/maas/boot-resources/current/ubuntu/amd64/ga-18.04/bionic/daily
2) /var/lib/maas/boot-resources/current/ubuntu/amd64/generic/bionic/daily

The sha256 of the *initrd file* that triggers the issue is


Be careful with daily image auto-updates, because a recent update has overwritten the files for me and the issue was no longer reproducible.

Dmitrii Shcherbakov (dmitriis) wrote :
Download full text (5.4 KiB)

Tried using direct kernel boot with QEMU and couldn't reproduce it:

sha256sum /mnt/libvirt-images/boot-resources-20190419-115735/snapshot-20190419-115735/ubuntu/amd64/generic/bionic/daily/boot-initrd
69ca457a119fe309d315972ca2756a17bd9bc55bc98f2bea5542566a7f41b08f /mnt/libvirt-images/boot-resources-20190419-115735/snapshot-20190419-115735/ubuntu/amd64/generic/bionic/daily/boot-initrd

30463 qemu-system-x86_64 -enable-kvm -name guest=maas-vhost6,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-224-maas-vhost6/master-key.aes -machine pc-q35-2.11,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu Haswell-noTSX-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc_adjust=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on -m 2048 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 399eae83-f059-4ac0-9609-5bb548f5a90a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-224-maas-vhost6/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -kernel /mnt/libvirt-images/boot-resources-20190419-115735/snapshot-20190419-115735/ubuntu/amd64/generic/bionic/daily/boot-kernel -initrd /mnt/libvirt-images/boot-resources-20190419-115735/snapshot-20190419-115735/ubuntu/amd64/generic/bionic/daily/boot-initrd -append BOOT_IMAGE= nomodeset ro root=squash: ip=::::maas-vhost6:BOOTIF ip6=off overlayroot=tmpfs overlayroot_cfgdisk=disabled cc:{'datasource_list': ['MAAS']}end_cc cloud-config-url= apparmor=0 log_host= log_port=5247 --- console=ttyS0,115200 initrd= BOOTIF=01-52-54-00-3f-ae-46 -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 -device i82801b11-bridge,id=pci.8,bus=pcie.0,addr=0x1e -device 
pci-bridge,chassis_nr=9,id=pci.9,bus=pci.8,addr=0x0 -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x1d.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x1d -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x1d.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x1d.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.2,addr=0x0 -device virtio-scsi-pci,id=scsi1,bus=pci.4,addr=0x0 -device ahci,id=sata1,bus=pci.9,addr=0x1 -device ...


After a longer session on IRC this is now un-reproducible for Dmitrii as well.
Let's summarize the current status for the next one coming by:

Current ideas still are:
a) at 2G the placement of kernel/initrd is too close together (as placed by pxelinux); when the kernel unpacks itself, it overwrites the initrd
b) the maas pxe backend might add some bits in transmission that break it
=> both of the above might depend on the kernel/initrd size (which would explain why only some kernels/daily images are affected)

We would be back to needing a case that reliably triggers outside of MAAS.
If affected, read through the former comments and:
0. save and attach here your kernel/initrd/xml to help reproduce it in some debuggable way
1. state the host OS version and the libvirt/qemu/maas components used
2. check if it reproduces reliably for you (retry a few times)
3. try to do the same without PXE booting (so far that always resolved the case)
   (see above for details)
4a. if you can recreate it without PXE, report here how you did so
4b. if only with MAAS, then try to modify the sizes of the roms (comment #62)
... (per discussion)

@sfeole Thanks for the dup that hit this as well now - as with all others, I ask you to please help catch the data needed to finally recreate and debug this.

From IRC:
[07:00] <cpaelzer> sfeole: please attach your guest XML, and the used initrd and kernel to the bug
[07:02] <cpaelzer> best would be also the pxeconfig that is provided

@MAAS Team
So far this has only occurred with MAAS; all other attempts to recreate it elsewhere have booted fine.
I wonder if either TFTP or the PXE loader used could be the trigger of the bug.
Therefore (from IRC as well):
[07:02] <cpaelzer> roaksoax: on this bug one of the components we still have to verify if it is related is the way you provide the TFTP
[07:02] <cpaelzer> could you outline on the bug how users once they hit the bug could extract that and attach it to the bug
[07:03] <cpaelzer> that way we can hopefully finally recreate the case

I tried that back in comment #31, but that was only an attempt to recreate your setup.
For the PXE config (which I was guessing at) and TFTP (I used the normal tftpd package) we could still get closer to your setup.

I'll add a section to the description of the bug what users are supposed to attach, it would be great if you could help to outline how users can get those.

@MAAS Team
[07:03] <cpaelzer> and IIRC you don't use tftpd or such but something maas internal for tftpd is that right - could you outline how one could use/setup this so that we can compare tftpd vs maas-tftpd as well?
That is an extra question to that above, if you could help here as well that would be great.

description: updated

Added to the description what we need from anyone affected.

@Maas Team see the former comment for the requested guidance on how users can get this data.
The MAAS task is moved back from Incomplete to Confirmed for providing these how-tos.

Changed in maas:
status: Incomplete → Confirmed