Only 64 memory regions or DIMMs supported for hotplug.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
The Ubuntu-power-systems project | Won't Fix | Medium | Canonical Kernel Team |
linux (Ubuntu) | Invalid | Medium | Canonical Kernel Team |
Bug Description
== Comment: #0 - HARIHARAN T. SUNDARESH REDDY <email address hidden> - 2018-10-03 01:29:59 ==
Description: Hotplug is supported for only 64 DIMMs, and the limit is controlled by the vhost module parameter exposed through sysfs:
# cat /sys/module/
64
However, QEMU supports a maximum of 256 slots. In my opinion, max_mem_regions must be >= 256.
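The limit can be read back from sysfs while the vhost module is loaded. A minimal check, assuming the parameter sits at the standard module-parameter path (the full path is truncated in the output above):

# Current number of memory regions a vhost backend will accept (64 by default, as shown above)
cat /sys/module/vhost/parameters/max_mem_regions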
log:
63
Device attached successfully <--- the first 63 devices attached successfully; every later attach failed.
64
error: Failed to attach device from hp_mem.xml
error: internal error: unable to execute QEMU command 'device_add': a used vhost backend has no free memory slots left
65
error: Failed to attach device from hp_mem.xml
error: internal error: unable to execute QEMU command 'device_add': a used vhost backend has no free memory slots left
66
error: Failed to attach device from hp_mem.xml
error: internal error: unable to execute QEMU command 'device_add': a used vhost backend has no free memory slots left
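The attach failures above come from the hotplug loop used to add DIMMs to the guest. A sketch of how such a loop is typically driven with libvirt, assuming a guest defined with a large enough <maxMemory> slot count; the actual hp_mem.xml used in this report is not shown, so the file below is only illustrative:

# Hypothetical memory-device XML (the real hp_mem.xml is not included in the report)
cat > hp_mem.xml <<'EOF'
<memory model='dimm'>
  <target>
    <size unit='MiB'>256</size>
    <node>0</node>
  </target>
</memory>
EOF

# Hot-add one DIMM to a running guest; repeating this fails after 63 attaches
# once the vhost backend runs out of memory slots.
virsh attach-device <guest-name> hp_mem.xml --live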
System Details :
uname -a
Linux ltc-fvttest 4.18.5-custom #1 SMP Wed Sep 26 12:11:51 CDT 2018 ppc64le ppc64le ppc64le GNU/Linux
root@ltc-
NAME="Ubuntu"
VERSION="18.10 (Cosmic Cuttlefish)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu Cosmic Cuttlefish (development branch)"
VERSION_ID="18.10"
HOME_URL="https:/
SUPPORT_URL="https:/
BUG_REPORT_URL="https:/
PRIVACY_
VERSION_
UBUNTU_
tags: | added: architecture-ppc64le bugnameltc-171919 severity-medium targetmilestone-inin--- |
Changed in ubuntu: | |
assignee: | nobody → Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage) |
affects: | ubuntu → sysfsutils (Ubuntu) |
Changed in ubuntu-power-systems: | |
importance: | Undecided → Medium |
Andrew Cloke (andrew-cloke) wrote : | #1 |
I believe the "uname -a" output shows that this issue was observed with a custom, i.e. a non-Ubuntu, kernel. Could you confirm that the issue is seen with an unmodified Ubuntu kernel?
Also, do you have a proposed (kernel?) patch that resolves this issue?
Thanks.
affects: | sysfsutils (Ubuntu) → linux (Ubuntu) |
Changed in ubuntu-power-systems: | |
assignee: | nobody → Canonical Kernel Team (canonical-kernel-team) |
tags: | added: triage-g |
tags: | added: kernel-da-key |
Changed in linux (Ubuntu): | |
assignee: | Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage) → Canonical Kernel Team (canonical-kernel-team) |
Changed in ubuntu-power-systems: | |
status: | New → Incomplete |
Changed in linux (Ubuntu): | |
status: | New → Incomplete |
importance: | Undecided → Medium |
------- Comment From <email address hidden> 2019-01-04 00:33 EDT-------
(In reply to comment #5)
> I believe the "uname -a" output shows that this issue was observed with a
> custom, i.e. a non-Ubuntu, kernel. Could you confirm that the issue is seen
> with an unmodified Ubuntu kernel?
>
> Also, do you have a proposed (kernel?) patch that resolves this issue?
>
> Thanks.
Hariharan,
Any updates on this? Do you observe this issue on a released Ubuntu kernel version?
bugproxy (bugproxy) wrote : | #3 |
------- Comment From <email address hidden> 2019-01-17 06:33 EDT-------
I am seeing the same value on the standard kernel as well.
root@ltc-test:~# cat /sys/module/
64
root@ltc-test:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.10 (Cosmic Cuttlefish)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.10"
VERSION_ID="18.10"
HOME_URL="https:/
SUPPORT_URL="https:/
BUG_REPORT_URL="https:/
PRIVACY_
VERSION_
UBUNTU_
root@ltc-
Linux ltc-test 4.18.0-11-generic #12-Ubuntu SMP Tue Oct 23 19:20:58 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux
Changed in ubuntu-power-systems: | |
status: | Incomplete → Triaged |
Can you test that this works with the module parameter set to a different value?
vhost.max_mem_regions
Thanks.
Cascardo.
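The dotted form above is the module parameter as it would be given on the kernel command line. A sketch of one way to test a larger value on an Ubuntu/GRUB install (256 is the slot count mentioned in the description; the exact value to test is up to the reporter):

# Apply the parameter from boot so it is set when the vhost module loads
# (edit /etc/default/grub and add vhost.max_mem_regions=256 to GRUB_CMDLINE_LINUX_DEFAULT)
sudo update-grub
sudo reboot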
Changed in ubuntu-power-systems: | |
status: | Triaged → Incomplete |
bugproxy (bugproxy) wrote : | #5 |
------- Comment From <email address hidden> 2019-02-18 05:01 EDT-------
I was able to set max_mem_regions to 512 using the module parameter for vhost:
modprobe -r vhost
modprobe vhost max_mem_regions=512
cat /sys/module/
512
I was able to set max_mem_regions to the maximum value.
Manoj Iyer (manjo) wrote : | #6 |
IBM, could you please advise us whether this bug can now be closed on our end? Also, please provide the kernel version used.
bugproxy (bugproxy) wrote : | #7 |
------- Comment From <email address hidden> 2019-02-19 00:16 EDT-------
Yes, we can close this bug.
Thanks.
Andrew Cloke (andrew-cloke) wrote : | #8 |
Thanks. Closing bug out.
Changed in ubuntu-power-systems: | |
status: | Incomplete → Won't Fix |
Changed in linux (Ubuntu): | |
status: | Incomplete → Invalid |
bugproxy (bugproxy) wrote : | #9 |
------- Comment From <email address hidden> 2019-02-20 00:30 EDT-------
The method mentioned in comment 9 uses a module parameter to change the value of max_mem_regions. This works, and hotplug succeeds up to 256 times.
However, the same solution may not be useful in a production environment. To set the module parameter we have to remove the module first, which means bringing down all running guests.
It would be better to set max_mem_regions to a maximum value, say 512, by default. That would enable hotplug in production environments without bringing down any modules or guests.
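One way to avoid the module reload entirely is to set the larger value before any guest starts, so it is picked up the first time vhost is loaded. A sketch under that assumption, using a modprobe.d options file (512 is the value used earlier in this report):

# Make the larger limit the default whenever vhost is loaded
echo 'options vhost max_mem_regions=512' | sudo tee /etc/modprobe.d/vhost.conf

# If vhost ends up in the initramfs, refresh it so the option is seen at boot
sudo update-initramfs -u

# The setting takes effect the next time the module loads (e.g. after a reboot).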
bugproxy (bugproxy) wrote : | #10 |
------- Comment From <email address hidden> 2019-03-15 02:50 EDT-------
(In reply to comment #14)
> The method mentioned in comment 9 uses a module parameter to change the
> value of max_mem_regions. This works, and hotplug succeeds up to 256
> times.
>
> However, the same solution may not be useful in a production environment.
> To set the module parameter we have to remove the module first, which
> means bringing down all running guests.
>
> It would be better to set max_mem_regions to a maximum value, say 512, by
> default. That would enable hotplug in production environments without
> bringing down any modules or guests.
Hello Canonical,
Any comments on this?
Andrew Cloke (andrew-cloke) wrote : | #11 |
> Hello Canonical,
>
> Any comments on this?
I'm afraid this bug has been closed out. If there are further questions, could you raise a new bug?
Many thanks.