Comment 32 for bug 1734856

bugproxy (bugproxy) wrote : Comment bridged from LTC Bugzilla

------- Comment From <email address hidden> 2017-12-29 22:02 EDT-------
I was able to recreate the issue with Ubuntu 17.10 (artful) as the host, running xenial, zesty, and artful guest VMs: in my testing, a guest crashed when more than 36 disks with "boot order=1" were added.
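For reference, a rough reproducer sketch (the image path, XML file name, and target device name below are hypothetical, not the exact ones from this test; note that libvirt requires each <boot order='N'/> value to be unique across devices, so each disk carries its own order value): each extra disk gets its own qcow2 image and an XML fragment with a boot-order element, then is attached to the guest. With the unfixed SLOF, the guest crashed once more than 36 such bootable disks were present.

qemu-img create -f qcow2 /var/lib/libvirt/images/extra37.qcow2 1G
cat > /tmp/extra37.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/extra37.qcow2'/>
  <target dev='vdal' bus='virtio'/>
  <boot order='37'/>
</disk>
EOF
virsh attach-device kal-artful_vm1 /tmp/extra37.xml --config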

Applied the artful-proposed SLOF fix on the host and started the xenial, zesty, and artful guests. With the proposed fix, all three guests booted successfully and showed all 49 attached disks.

Results:
----------------------------------------------------------------------
Installed slof version:
root@lep8d:~# apt search slof
Sorting... Done
Full Text Search... Done
qemu-slof/artful,now 20170724+dfsg-1 all [installed,automatic]
Slimline Open Firmware -- QEMU PowerPC version
---------------------------------------------------------------------
Fixed slof version:
root@lep8d:~# apt-get install qemu-slof/artful-proposed
Reading package lists... Done
Building dependency tree
Reading state information... Done
Selected version '20170724+dfsg-1ubuntu0.1' (Ubuntu:17.10/artful-proposed [all]) for 'qemu-slof'
Suggested packages:
qemu
The following packages will be upgraded:
qemu-slof
1 upgraded, 0 newly installed, 0 to remove and 37 not upgraded.
Need to get 172 kB of archives.
After this operation, 1,024 B of additional disk space will be used.
Get:1 http://ports.ubuntu.com/ubuntu-ports artful-proposed/main ppc64el qemu-slof all 20170724+dfsg-1ubuntu0.1 [172 kB]
Fetched 172 kB in 0s (258 kB/s)
(Reading database ... 103312 files and directories currently installed.)
Preparing to unpack .../qemu-slof_20170724+dfsg-1ubuntu0.1_all.deb ...
Unpacking qemu-slof (20170724+dfsg-1ubuntu0.1) over (20170724+dfsg-1) ...
Setting up qemu-slof (20170724+dfsg-1ubuntu0.1) ...
---------------------------------------------------------------------------
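As a sanity check after the upgrade (a small sketch; /usr/share/slof/slof.bin is the path the Ubuntu qemu-slof package normally installs), the new version and the firmware image QEMU loads for pseries guests can be confirmed with:

dpkg-query -W -f='${Version}\n' qemu-slof   # expect 20170724+dfsg-1ubuntu0.1
ls -l /usr/share/slof/slof.bin              # SLOF image used by qemu-system-ppc64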
root@lep8d:~# service libvirtd restart

root@lep8d:~# virsh list --all
Id Name State
----------------------------------------------------
16 kal-artful_vm1 running
17 kal-xenial_vm1 running
18 kal-zesty_vm1 running

Inside each guest, I verified that all 49 disks are visible:
root@lep8d:~# virsh console kal-artful_vm1
root@ubuntu:~# lsblk | grep disk | wc -l
49

root@lep8d:~# virsh console kal-xenial_vm1
root@ubuntu:~# lsblk | grep disk | wc -l
49

root@lep8d:~# virsh console kal-zesty_vm1
root@ubuntu:~# lsblk | grep disk | wc -l
49
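The guest-side counts can also be cross-checked from the host; a quick sketch that counts the disk devices libvirt has attached to each domain:

for vm in kal-artful_vm1 kal-xenial_vm1 kal-zesty_vm1; do
  echo -n "$vm: "
  virsh domblklist "$vm" --details | grep -c ' disk '
done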

In summary, I have validated the fix on an artful host with 3 different guests, and it is working fine. Copying Srikanth to take it forward if anything else needs to be validated.