Ability to control phys-bits through libvirt

Bug #1769053 reported by Daniel Axtens on 2018-05-04
Affects              Status      Importance  Assigned to
QEMU                             Undecided   Unassigned
libvirt              Confirmed   Undecided
libvirt (Ubuntu)                 High        Unassigned
qemu (Ubuntu)                    Undecided   Unassigned

Bug Description

Attempting to start a KVM guest with more than 1TB of RAM fails.

It looks like we might need some extra patches: https://lists.gnu.org/archive/html/qemu-discuss/2017-12/msg00005.html

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: qemu-system-x86 1:2.11+dfsg-1ubuntu7
ProcVersionSignature: Ubuntu 4.15.0-20.21-generic 4.15.17
Uname: Linux 4.15.0-20-generic x86_64
ApportVersion: 2.20.9-0ubuntu7
Architecture: amd64
CurrentDesktop: Unity:Unity7:ubuntu
Date: Fri May 4 16:21:14 2018
InstallationDate: Installed on 2017-04-05 (393 days ago)
InstallationMedia: Ubuntu 16.10 "Yakkety Yak" - Release amd64 (20161012.2)
MachineType: Dell Inc. XPS 13 9360
ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-20-generic root=/dev/mapper/ubuntu--vg-root ro quiet splash transparent_hugepage=madvise vt.handoff=1
SourcePackage: qemu
UpgradeStatus: Upgraded to bionic on 2018-04-30 (3 days ago)
dmi.bios.date: 02/26/2018
dmi.bios.vendor: Dell Inc.
dmi.bios.version: 2.6.2
dmi.board.name: 0PF86Y
dmi.board.vendor: Dell Inc.
dmi.board.version: A00
dmi.chassis.type: 9
dmi.chassis.vendor: Dell Inc.
dmi.modalias: dmi:bvnDellInc.:bvr2.6.2:bd02/26/2018:svnDellInc.:pnXPS139360:pvr:rvnDellInc.:rn0PF86Y:rvrA00:cvnDellInc.:ct9:cvr:
dmi.product.family: XPS
dmi.product.name: XPS 13 9360
dmi.sys.vendor: Dell Inc.

Daniel Axtens (daxtens) wrote :

(I'm not trying to start this on my laptop, so ignore the uploaded files. They're just what apport-bug decided to include.)

Hi Daniel,
might I ask what you expect now?

The changes to seabios are not even upstream yet in git://git.seabios.org/seabios.git, and the changes to qemu are not upstream in git://git.qemu.org/qemu.git either.

The linked changes also target a qemu from way back (pre-trusty), so they just don't apply. Some of them have since been handled upstream, but differently; for example, the second qemu change from the mail above has been in qemu since "6c7c3c21 x86: implement la57 paging mode", i.e. qemu >= 2.9.
That said, this is the one I could track down; the other changes may also be upstream, but in a very different form.

At least for myself, I currently have no >1TB system to even try this. I have done it on s390x, where it already works fine, but you need x86 here.

Even if all of the above were resolved, the mail above states that those patches still have issues when going >1TB.

I think you'd need a clear "this is what I tried and this is what fails" with a setup as simple as possible. If it fails in Ubuntu we can build a latest upstream build for you, and if it fails there we can work with upstream to resolve it properly. From there we can think about the backportability of those changes. But the suggested "hey, there are these patches" approach won't work.

Please don't get me wrong (I want to help), but so far this appears to me to be a suggestion of a set of non-upstreamed, non-applicable, non-testable, non-working changes.
We need to sort out how to handle this, which is why I'm asking what you expect to happen now.


Hi Christian,

Sorry, I should have been a *lot* more clear.

I wanted to file the bug so that we have somewhere to figure out what needs
to be done and track the progress - trying to avoid it becoming something
we vaguely know about but don't ever do anything about.

Thanks so much for your analysis of the patches. I will dig in to the
upstream status and see where they're at with large memory guests.

I know we're missing test hardware. I will make some enquiries within the
team and see what can dig up, otherwise we have a customer that might be
able to run some tests.

So for now, the action items are:
 - I will hunt down a >1TB machine.
 - I will check what the progress of 1TB guests in upstream Qemu is.

Apologies again, and thanks for the pointers.

Regards,
Daniel


Thanks for your clarification Daniel, I'll mark both tasks incomplete then until you come back with that data.

Changed in qemu (Ubuntu):
status: New → Incomplete
Changed in qemu:
status: New → Incomplete

Interesting; I thought this was supposed to work.
I know we (RH) have some downstream patches for >1TB RAM, but the last I'd heard they weren't supposed to be necessary any more, except for compatibility with old versions.

It's probably worth checking the guest's view of the CPU's physical address bits and making sure it's no bigger than the host's (phys-bits=n or host-phys-bits=true on the -cpu option).
QEMU often defaults to 40 bits and things get confusing.

Note also you can do some 1TB+ tests on smaller machines as long as they have a large enough address size on the host CPUs. Tricks like adding empty hot-plug DIMM slots leaving a 1TB hole can tickle some bugs. Even adding 1TB of swap to your host and being careful with your guest can work :-)
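For scale on that 40-bit default, here is a sketch of the arithmetic (shell, with made-up variable names) showing why 40 phys-bits tops out at exactly 1 TiB of addressable physical memory:

```shell
# How much physical memory can a CPU with N physical address bits map?
phys_bits=40
max_bytes=$(( 1 << phys_bits ))                            # 2^40 bytes
tib=$(( max_bytes / (1024 * 1024 * 1024 * 1024) ))
echo "phys-bits=${phys_bits} caps the address space at ${tib} TiB"   # 1 TiB
```

The same arithmetic shows why the 39-bit Skylake parts mentioned later cap out at 512 GiB.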

Dan Streetman (ddstreet) wrote :

You don't need a >1TB host to spin up a >1TB guest. Unless you're using PCI passthrough (and/or SR-IOV), or something else that requires qemu to allocate and pin all guest memory, you can simply overcommit; normal guests don't require memory pre-allocation or pinning.

On your host do this to allow overcommitting such a large amount (this allows 16T but can be adjusted as needed):

$ echo $[ 16 * 1024 * 1024 * 1024 ] | sudo tee /proc/sys/vm/overcommit_kbytes
17179869184
$ echo 1 | sudo tee /proc/sys/vm/overcommit_memory
1

Then just virsh edit your guest to use >1TB, e.g.:

  <memory unit='GiB'>1500</memory>

And of course, stop and restart the guest to pick up the xml change.
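The overcommit value used above can be sanity-checked with shell arithmetic: vm.overcommit_kbytes is expressed in KiB, so the echoed number is 16 TiB.

```shell
# vm.overcommit_kbytes is in KiB, so 16 * 1024^3 KiB is a 16 TiB commit limit.
kbytes=$(( 16 * 1024 * 1024 * 1024 ))
echo "$kbytes"                                    # 17179869184
echo "$(( kbytes / (1024 * 1024 * 1024) )) TiB"   # 16 TiB
```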

Dan Streetman (ddstreet) wrote :

BTW this is the stacktrace I get from a Xenial guest on Xenial host:

[ 0.000000] BUG: unable to handle kernel paging request at ffffc90000000004
[ 0.000000] IP: [<ffffffff81f7dc60>] hpet_enable.part.13+0x23/0x2a5
[ 0.000000] PGD 171629ab067 PUD 171629ac067 PMD 171629ad067 PTE 80000000fed00073
[ 0.000000] Oops: 0009 [#1] SMP
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.0-122-generic #146-Ubuntu
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
[ 0.000000] task: ffffffff81e13500 ti: ffffffff81e00000 task.ti: ffffffff81e00000
[ 0.000000] RIP: 0010:[<ffffffff81f7dc60>] [<ffffffff81f7dc60>] hpet_enable.part.13+0x23/0x2a5
[ 0.000000] RSP: 0000:ffffffff81e03ef0 EFLAGS: 00010282
[ 0.000000] RAX: ffffc90000000000 RBX: ffffffffffffffff RCX: 0000000000000000
[ 0.000000] RDX: 0000000000000000 RSI: 0000000000000100 RDI: 0000000000000000
[ 0.000000] RBP: ffffffff81e03f10 R08: 000000000001ad50 R09: 00000000000001f0
[ 0.000000] R10: ffff89773fa20000 R11: 0000000000000001 R12: ffff89773f99f6c0
[ 0.000000] R13: ffffffff8200e920 R14: ffffffff8201c2e0 R15: ffffffff81e03fa8
[ 0.000000] FS: 0000000000000000(0000) GS:ffff897162c00000(0000) knlGS:0000000000000000
[ 0.000000] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 0.000000] CR2: ffffc90000000004 CR3: 0000000001e0a000 CR4: 0000000000000630
[ 0.000000] Stack:
[ 0.000000] ffffffffffffffff ffff89773f99f6c0 ffffffff8200e920 ffffffff8201c2e0
[ 0.000000] ffffffff81e03f20 ffffffff81f7df00 ffffffff81e03f30 ffffffff81f6ee7a
[ 0.000000] ffffffff81e03f40 ffffffff81f6ee4a ffffffff81e03f80 ffffffff81f63f71
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff81f7df00>] hpet_enable+0x1e/0x20
[ 0.000000] [<ffffffff81f6ee7a>] hpet_time_init+0x9/0x19
[ 0.000000] [<ffffffff81f6ee4a>] x86_late_time_init+0x10/0x17
[ 0.000000] [<ffffffff81f63f71>] start_kernel+0x3d8/0x4aa
[ 0.000000] [<ffffffff81f63120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffff81f63339>] x86_64_start_reservations+0x2a/0x2c
[ 0.000000] [<ffffffff81f63485>] x86_64_start_kernel+0x14a/0x16d
[ 0.000000] Code: 01 00 00 00 41 5c 5d c3 55 48 8b 3d 63 f4 18 00 be 00 04 00 00 48 89 e5 41 56 41 55 41 54 53 e8 f7 f2 0e ff 48 89 05 d8 f4 18 00 <8b> 48 04 b8 e9 03 00 00 48 8b 15 c9 f4 18 00 8b 52 10 ff c2 75
[ 0.000000] RIP [<ffffffff81f7dc60>] hpet_enable.part.13+0x23/0x2a5
[ 0.000000] RSP <ffffffff81e03ef0>
[ 0.000000] CR2: ffffc90000000004
[ 0.000000] ---[ end trace 404be15fe05aa681 ]---
[ 0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
[ 0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task!

Dan Streetman (ddstreet) wrote :

And with non-massive memory (so the guest actually boots), the guest shows only 40 bits of physical address space, so qemu will definitely have to increase that to provide >1TB of physical memory to the guest (assuming qemu doesn't adjust it dynamically based on the total memory given to the guest):

ubuntu@largemem:~$ grep -m 1 'address sizes' /proc/cpuinfo
address sizes : 40 bits physical, 48 bits virtual
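If you want to pull that number out programmatically, a small sketch (the sed pattern assumes the /proc/cpuinfo field layout shown above; the sample line is captured rather than read from the live file):

```shell
# Extract the physical address width from /proc/cpuinfo-style output.
line='address sizes   : 40 bits physical, 48 bits virtual'
phys=$(printf '%s\n' "$line" | sed -n 's/.*: *\([0-9]*\) bits physical.*/\1/p')
echo "$phys bits physical"                                    # 40 bits physical
echo "max RAM: $(( (1 << phys) / (1024 * 1024 * 1024) )) GiB" # max RAM: 1024 GiB
```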

Ah right Dan, if you're seeing 40 bits physical in the guest you definitely need to try the flags I suggested in comment 6; host-phys-bits=true should work for you.

> Interesting; I thought this was supposed to work.

Exactly, that was my thought when triaging this initially.
Furthermore, I assume the people working on la57 (https://lwn.net/Articles/730925/) and related features ran tests at much bigger sizes.

> Ah right Dan, if you're seeing the 40 bits physical in the guest you definitely need to try the flags I suggest in comment 6; host-phys-bits=true should work for you.

I verified that Bionic is on at least libvirt 4.0 / qemu 2.11.1, since we want to check things under the "supposed to work now" assumption.

Defaults:
Host: address sizes : 46 bits physical, 48 bits virtual
Guest: address sizes : 40 bits physical, 48 bits virtual

I ensured that with the option -cpu host,host-phys-bits=true set, I successfully get in the guest what my host can provide:
Guest: address sizes : 46 bits physical, 48 bits virtual

Starting a guest with that and >1TB of RAM (which would be mostly on swap if needed) works just fine, as expected. Here ~1063 GB from /proc/meminfo:
MemTotal: 1114676492 kB

I also checked a more compatible approach like -cpu qemu64,phys-bits=42 and that works as well.

IMHO - if anything - one could argue that libvirt/qemu could be smarter about e.g. auto-adding those arguments (or printing a warning) when crossing a certain memory size.

So for now I'd stick to the "actually works" summary and keep the status to incomplete.
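The kind of warning Christian suggests could boil down to a comparison like the following sketch (hypothetical variable names; it deliberately ignores the PCI and hotplug overhead that the thread discusses):

```shell
# Hypothetical pre-flight check: warn when the requested guest RAM cannot
# fit into 2^phys-bits. PCI and hotplug space would also eat into the limit.
phys_bits=40
guest_ram_kib=1114676492                      # ~1063 GiB, as in the test above
limit_kib=$(( (1 << phys_bits) / 1024 ))
if [ "$guest_ram_kib" -gt "$limit_kib" ]; then
  echo "warning: ${guest_ram_kib} KiB exceeds a ${phys_bits}-bit address space"
fi
```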


* ChristianEhrhardt (<email address hidden>) wrote:
> > Interesting; I thought this was supposed to work.
>
> Exactly that was my thought when triaging it initially
> Furthermore I assume people working la57 (https://lwn.net/Articles/730925/) and such ran tests on much bigger sizes.

I assume so, but I've not looked at the detail of that.

> > Ah right Dan, if you're seeing the 40 bits physical in the guest you
> definitely need to try the flags I suggest in comment 6; host-phys-
> bits=true should work for you.
>
> I tested Bionic to be at least on libvirt 4.0 / qemu 2.11.1 when we want
> to check things under the "supposed to work now" flag.
>
> Defaults:
> Host: address sizes : 46 bits physical, 48 bits virtual
> Guest: address sizes : 40 bits physical, 48 bits virtual
>
> I ensured that with option -cpu host,host-phys-bits=true set I successfully get what my host can provide in the guest:
> Guest: address sizes : 46 bits physical, 48 bits virtual
>
> Starting a guest with that >1TB (that would be mostly on swap if needed) works just fine as expected. Here ~1063 GB from /proc/meminfo
> MemTotal: 1114676492 kB

OK, good - that suggests there's nothing missing.
We enable host-phys-bits=true by default I think (in our machine type?)

> I also checked a more compatible approach like -cpu qemu64,phys-bits=42
> and that works as well.
>
> IMHO - if anything - one could argue that libvirt/qemu could be smarter
> about e.g. auto adding those arguments (or print a warning) when
> crossing a certain memory size.

The problem is there are a whole bunch of things that are hard to deal with:
  a) Cheaper CPUs tend to have smaller phys-bits even in the same
     generation; e.g. my laptop is still 36 bits, and a lot are 39 bits.
     I think the same is true of the Xeon E3-... family. It makes it hard
     to know what to pick when you're going to allow migration.

  b) Reasoning about the total address-size range is difficult; you've
     got to take into account PCI address space, hot-plug space, etc.
     to know where the upper edge is.

Dave

> So for now I'd stick to the "actually works" summary and keep the status
> to incomplete.
>


On Tue, May 8, 2018 at 10:37 AM, Dr. David Alan Gilbert <<email address hidden>
> wrote:

> * ChristianEhrhardt (<email address hidden>) wrote:
> > > Interesting; I thought this was supposed to work.
> >
> > Exactly that was my thought when triaging it initially
> > Furthermore I assume people working la57 (https://lwn.net/Articles/
> 730925/) and such ran tests on much bigger sizes.
>
> I assume so, but I've not looked at the detail of that.
>
> > > Ah right Dan, if you're seeing the 40 bits physical in the guest you
> > definitely need to try the flags I suggest in comment 6; host-phys-
> > bits=true should work for you.
> >
> > I tested Bionic to be at least on libvirt 4.0 / qemu 2.11.1 when we want
> > to check things under the "supposed to work now" flag.
> >
> > Defaults:
> > Host: address sizes : 46 bits physical, 48 bits virtual
> > Guest: address sizes : 40 bits physical, 48 bits virtual
> >
> > I ensured that with option -cpu host,host-phys-bits=true set I
> successfully get what my host can provide in the guest:
> > Guest: address sizes : 46 bits physical, 48 bits virtual
> >
> > Starting a guest with that >1TB (that would be mostly on swap if needed)
> works just fine as expected. Here ~1063 GB from /proc/meminfo
> > MemTotal: 1114676492 kB
>
> OK, good - that suggests there's nothing missing.
> We enable host-phys-bits=true by default I think (in our machine type?)
>

Interesting approach; I see your comment about that already in [1] when it
was added.
I didn't realize some machine types were setting this already. I assume it
isn't the general default because of migratability to other hosts (like our
36/39-bit laptops).

I assume "we" in this context means Red Hat downstream changes to (some)
machine type(s)?
I see the benefit of huge guests working without setting those properties,
but I wonder whether that has caused you trouble with migrations?

[1]: https://patchwork.kernel.org/patch/9223999/

> > I also checked a more compatible approach like -cpu qemu64,phys-bits=42
> > and that works as well.
> >
> > IMHO - if anything - one could argue that libvirt/qemu could be smarter
> > about e.g. auto adding those arguments (or print a warning) when
> > crossing a certain memory size.
>
> The problem is there are a whole bunch of things that are hard to deal
> with:
> a) Cheaper CPUs tend to have smaller phys-bits even in the same
> generation; e.g. my laptop is still 36 bits, a lot are 39 bits. I think
> the same is true of the Xeon E3-.... family. It makes it hard to know
> what to pick when you're going to allow migration.
>
> b) Reasoning about the total address size range is difficult; you've
> got to take into account PCI address space and hot plug space etc
> to know where the upper edge is.
>

I agree that checking the total address size might yield too many false
positives, given all the complexities of "estimating" that size.
/me gives up on this idea :-)

> Dave
>
> > So for now I'd stick to the "actually works" summary and keep the status
> > to incomplete.
> >


* ChristianEhrhardt (<email address hidden>) wrote:
> On Tue, May 8, 2018 at 10:37 AM, Dr. David Alan Gilbert <<email address hidden>
> > wrote:
>
> > * ChristianEhrhardt (<email address hidden>) wrote:
> > > > Interesting; I thought this was supposed to work.
> > >
> > > Exactly that was my thought when triaging it initially
> > > Furthermore I assume people working la57 (https://lwn.net/Articles/
> > 730925/) and such ran tests on much bigger sizes.
> >
> > I assume so, but I've not looked at the detail of that.
> >
> > > > Ah right Dan, if you're seeing the 40 bits physical in the guest you
> > > definitely need to try the flags I suggest in comment 6; host-phys-
> > > bits=true should work for you.
> > >
> > > I tested Bionic to be at least on libvirt 4.0 / qemu 2.11.1 when we want
> > > to check things under the "supposed to work now" flag.
> > >
> > > Defaults:
> > > Host: address sizes : 46 bits physical, 48 bits virtual
> > > Guest: address sizes : 40 bits physical, 48 bits virtual
> > >
> > > I ensured that with option -cpu host,host-phys-bits=true set I
> > successfully get what my host can provide in the guest:
> > > Guest: address sizes : 46 bits physical, 48 bits virtual
> > >
> > > Starting a guest with that >1TB (that would be mostly on swap if needed)
> > works just fine as expected. Here ~1063 GB from /proc/meminfo
> > > MemTotal: 1114676492 kB
> >
> > OK, good - that suggests there's nothing missing.
> > We enable host-phys-bits=true by default I think (in our machine type?)
> >
>
> Interesting approach, I see your comment about that already in [1] when it
> was added.
> I didn't realize some machine types were setting this already - I assume it
> isn't the general default for migratebility to other hosts (like our 36/39
> bit laptops).
>
> I assume "we" in this context are RedHat downstream changes to the (some)
> machine type(s)?

That's right; you should be able to find them if you dig around CentOS's set.

> I see the benefit for huge guests to work without setting those properties,
> but I wonder if that caused you trouble in regard to migrations?

It could, although I don't remember any reports of people hitting it.
The problem is finding a better solution; that's why I added both
host-phys-bits and the ability to set phys-bits=, so that you can make
a smarter choice based on what hardware you actually have. Who or what
should make that smarter choice has never really been answered.

> [1]: https://patchwork.kernel.org/patch/9223999/

Prior to that patch set, QEMU had always been a fixed 40 bits, so I
didn't change the default behaviour with that set; I just let you change
it by adding the flags.
(As I remember, TCG was hard-coded to 40 bits in some places, so I didn't
want to break that either.)

Dave

>
>
> > > I also checked a more compatible approach like -cpu qemu64,phys-bits=42
> > > and that works as well.
> > >
> > > IMHO - if anything - one could argue that libvirt/qemu could be smarter
> > > about e.g. auto adding those arguments (or print a warning) when
> > > crossing a certain memory size.
> >
> > The problem is there are a whole bunch of things that are hard to deal
> > with:
>...

Hmm, if we know that QEMU guests will crash and burn with >1TB mem when host-phys-bits/phys-bits are unset, then perhaps libvirt should do the right thing by default here. E.g. we can't use host-phys-bits=true due to migration compat issues, but if we see >1TB mem, libvirt could reasonably set phys-bits=NNN for some suitable value of NNN. We should expose this in the XML config for the CPU explicitly too.

* Daniel Berrange (<email address hidden>) wrote:
> Hmm, if we know that QEMU guests will crash & burn when > 1 TB mem, when
> host-phys-bits/phys-bits are unset, then perhaps libvirt should do the
> right thing by default here. eg we can't use host-phys-bits=true due to
> migration compat issues, but if we see > 1TB mem, libvirt could
> reasonably set phys-bits=NNN for some suitable value of NNN. We should
> expose this in the XML config for the CPU explicitly too.

Yep:
  a) It should be possible to add a setting to the XML to specify the
     phys-bits
  b) It should be possible for libvirt to check the host it's on can
     satisfy that requirement
  c) libvirt can complain if RAM > 2^phys-bits

but

  d) For smaller amounts of RAM it might still fail if
     RAM+rounding+PCI+hotplug space goes over the limit.
     Figuring that limit out is tricky (and I think it
     might be BIOS/EFI dependent as well, depending where they
     decide to put their PCI devices)

Dave
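Checks (b) and (c) reduce to simple comparisons once the widths are known. A rough shell sketch with illustrative values only (46 host bits as in the earlier test, and the 1500 GiB guest from the earlier XML example):

```shell
# (b) the host must provide at least the requested phys-bits;
# (c) guest RAM must fit within 2^phys-bits bytes.
host_bits=46                                 # as read from the host's cpuinfo
want_bits=42                                 # hypothetical phys-bits from XML
ram_bytes=$(( 1500 * 1024 * 1024 * 1024 ))   # 1500 GiB guest
[ "$want_bits" -le "$host_bits" ] && echo "(b) host can satisfy ${want_bits} bits"
[ "$ram_bytes" -lt $(( 1 << want_bits )) ] && echo "(c) RAM fits in 2^${want_bits}"
```

As point (d) notes, the real check would also have to budget for PCI and hotplug space above RAM.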

--
Dr. David Alan Gilbert / <email address hidden> / Manchester, UK

  Hi,

> d) For smaller amount of RAM it might still fail if
> RAM+rounding+pci+hotplug space goes over the limit.
> Figuring that limit out is tricky (and I thought it
> might be BIOS/EFI dependent as well depending where they
> decide to put their PCI devices)

Both seabios and ovmf try not to go too high in the address space; the reason
is exactly the phys-bits issue. Defaulting to 40 here not only limits
memory to 1TB. It also has the problem that the guest thinks it has 1TB of
address space when in reality it might be less. Even recent Skylake machines
have only phys-bits=39 (512G), and trying to use the physical address space
above 512G in the guest just doesn't work, because the phys-bits=39 limit
applies to EPT too.

So checking phys-bits in the firmware, for example to place pci bars as
high as possible in physical address space, is not going to work.

IIRC ovmf uses a 32G sized region with 32G alignment by default, which
will land below 64G (aka phys-bits=36 address space) unless the guest
has more than 30 (q35) or 31 (piix4) GB of memory.

seabios will not map PCI BARs above 4G unless it runs out of space below
4G. If needed, 64-bit PCI BARs are placed right above RAM, with gigabyte
alignment.

cheers,
  Gerd
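Gerd's OVMF placement numbers can be sanity-checked with a bit of alignment arithmetic. This is a rough sketch only; the real placement depends on the memory hole layout and firmware details.

```shell
# Align the top of a 30 GiB guest's RAM up to a 32 GiB boundary and place
# the 32 GiB-sized, 32 GiB-aligned MMIO window there: it ends at 64 GiB,
# i.e. still inside a phys-bits=36 address space.
G=$(( 1024 * 1024 * 1024 ))
ram_top=$(( 30 * G ))                                   # RAM ends below 32 GiB
win_base=$(( (ram_top + 32*G - 1) / (32*G) * (32*G) )) # align up to 32 GiB
echo "window: $(( win_base / G ))..$(( (win_base + 32*G) / G )) GiB"  # 32..64 GiB
```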


* Gerd Hoffmann (<email address hidden>) wrote:
> Hi,
>
> > d) For smaller amount of RAM it might still fail if
> > RAM+rounding+pci+hotplug space goes over the limit.
> > Figuring that limit out is tricky (and I thought it
> > might be BIOS/EFI dependent as well depending where they
> > decide to put their PCI devices)
>
> Both seabios and ovmf try to not go too high in address space. Reason
> is exactly the phys-bits issue. Using 40 here by default does not only
> limit the memory to 1TB. It also has the problem that the guest thinks
> it has 1TB of address space but in reality it might be less. Even
> recent skylake machines have phys-bits=39 (512G) only, and trying to use
> the physical address space above 512G in the guest just doesn't work
> because the phys-bits=39 limit applies to EPT too.
>
> So checking phys-bits in the firmware, for example to place pci bars as
> high as possible in physical address space, is not going to work.
>
> IIRC ovmf uses a 32G sized region with 32G alignment by default, which
> will land below 64G (aka phys-bits=36 address space) unless the guest
> has more than 30 (q35) or 31 (piix4) GB of memory.
>
> seabios will not map pci bars above 4G unless it runs out of space below
> 4G. If needed 64bit PCI bars will be placed right above ram, with
> gigabyte alignment.

Yep, I was tempted to turn on host-phys-bits=true upstream as well, but TCG
had a fixed 40 bits last time I looked.

Dave

> cheers,
> Gerd
>

David Coronel (davecore) on 2018-05-15
Changed in qemu (Ubuntu):
importance: Undecided → Critical

Critical priority on the qemu task, which was shown above to work just fine, is not correct IMHO.
After checking with David, he actually meant to raise the priority of the suggested libvirt extensions instead. I'm re-triaging the bug accordingly and will ping Daniel Berrange to ask whether work on this is already tracked in a libvirt BZ or under way in general.

Changed in libvirt (Ubuntu):
status: New → Triaged
importance: Undecided → High
Changed in qemu (Ubuntu):
importance: Critical → Undecided

Actually the qemu tasks should be "Invalid", not "Incomplete" as they currently are: after the discussion here we agreed that qemu is doing what is intended (and covered the reasons why larger bit widths are not the default). Therefore I'm setting the qemu tasks to that status.

Changed in qemu (Ubuntu):
status: Incomplete → Invalid
Changed in qemu:
status: Incomplete → Invalid

Description of problem:
Based on a discussion about qemu's ability to work with guests >1TB [1], it was identified that it might be wise for libvirt to be able to:
  a) add a setting to the XML to specify the phys-bits
  b) check that the host it runs on can satisfy that requirement (enough HW phys bits)
  c) complain if RAM > 2^phys-bits

It is known that (c) can't catch everything, as a guest might still fail if RAM+rounding+PCI+hotplug space goes over the limit. Figuring that limit out is tricky and is not part of the scope here.

[1]: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1769053

Version-Release number of selected component (if applicable):
Up to latest 4.3

How reproducible:
100%, though this is essentially a feature request rather than an error.

Steps to Reproduce:
1. try to control phys-bits through libvirt xml/api

Actual results:
No option exposed to do so.

Expected results:
Be able to control phys-bits

Additional info:
See the discussion on Launchpad [1] for more details of the qemu side of this.

Reported to upstream libvirt's Bugzilla with the suggestions of Daniel Berrange and David Alan Gilbert; now available at [1]. I linked that up in the LP bug status so that we auto-track it.

As this eventually has to go upstream, using the bug tracker should better ensure that there is no concurrent conflicting work (or opinion) on it.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1578278

Since all tasks except the libvirt one to expose these settings are now Invalid with regard to the issue here, I'm changing the title accordingly.

As a short-term solution for Ubuntu users I forked bug 1776189 to provide a machine-type-based workaround until this is implemented, widely available, and adopted.

summary: - Cannot start a guest with more than 1TB of RAM
+ Ability to control phys-bits through libvirt
Changed in libvirt:
importance: Unknown → Undecided
status: Unknown → Confirmed