Can't boot VM with more than 16 disks (slof buffer issue)
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| SLOF - Slimline Open Firmware | New | Unknown | | |
| The Ubuntu-power-systems project | | Critical | Canonical Server Team | |
| slof (Ubuntu) | | Critical | David Britton | |
| Xenial | | Undecided | Unassigned | |
| Zesty | | Undecided | Unassigned | |
| Artful | | Undecided | Unassigned | |
| Bionic | | Critical | David Britton | |
Bug Description
[Impact]
* Booting a KVM guest with many disks that are considered potential boot devices
fails on ppc64le
* In detail this was a buffer overflow; the processing of devices has been
changed to use dynamic allocation, which works with larger numbers of
devices.
[Test Case]
* Comment #12 has the full final test case; it is not repeated
here.
[Regression Potential]
* It is a change of the disk processing in the SLOF loader for ppc64el
- that means, on the one hand, only ppc64el can be affected by an issue
- on the other hand, there might be a disk combination not covered by my,
upstream's or IBM's testing that might now fail with the new code (unlikely)
- Given that the change is upstream and was provided by IBM, which I
consider the authority on that code, I think it is safe to be
accepted.
[Other Info]
* n/a
----
== Comment: #0 - RAHUL CHANDRAKAR <email address hidden> - 2017-11-28 03:40:37 ==
---Problem Description---
Can't boot VM with more than 16 disks.
It is an issue with qemu/SLOF (Bug: https:/
We need this fix in Ubuntu 16.04 and later releases.
Machine Type = 8348-21C (P8 Habanero)
---Steps to Reproduce---
Steps to recreate:
1. Create a VM
2. Attach 50 disks
3. Shutdown from OS
4. Start again and let it boot
---uname output---
Linux neo160.
---Debugger---
A debugger is not configured
Patch posted and awaiting response...
tags: | added: architecture-ppc64le bugnameltc-161776 severity-critical targetmilestone-inin--- |
Changed in ubuntu: | |
assignee: | nobody → Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage) |
affects: | ubuntu → qemu (Ubuntu) |
Changed in ubuntu-power-systems: | |
importance: | Undecided → Critical |
assignee: | nobody → Canonical Server Team (canonical-server) |
Changed in qemu (Ubuntu): | |
assignee: | Ubuntu on IBM Power Systems Bug Triage (ubuntu-power-triage) → David Britton (davidpbritton) |
importance: | Undecided → Critical |
tags: | added: triage-g |
Christian Ehrhardt (paelzer) wrote : | #2 |
Thanks for the bug; I see the patch is under active discussion at the moment.
Let's take a look at integrating it once that discussion concludes.
Since it is packaged in slof, I assigned that as the target and opened per-release tasks.
affects: | qemu (Ubuntu) → slof (Ubuntu) |
Christian Ehrhardt (paelzer) wrote : | #3 |
Also linked up the GH issue and the SLOF project in general.
We should get auto-updates once resolved, but to be sure, @nikunj, please feel free to ping here once the discussion on the patch concludes and it is committed upstream.
Changed in slof: | |
status: | Unknown → New |
Andrew Cloke (andrew-cloke) wrote : | #4 |
Marking as "incomplete" until patch lands upstream.
Changed in ubuntu-power-systems: | |
status: | New → Incomplete |
bugproxy (bugproxy) wrote : | #5 |
------- Comment From <email address hidden> 2017-12-13 06:03 EDT-------
Patches committed:
https:/
https:/
Should reflect in qemu repository in a day
Christian Ehrhardt (paelzer) wrote : | #6 |
Thanks for the ping Nikunj!
Changed in slof (Ubuntu Bionic): | |
status: | New → Triaged |
tags: | added: slof-18.04 |
Christian Ehrhardt (paelzer) wrote : | #7 |
@Nikunj - do you happen to know if Alexey is planning an official release soon that would include this fix?
Note: this is the first delta to carry, but I am looking into it as it is critical.
We want to get back to upstream releasing a version, Debian picking it up, and the package becoming a sync again in Bionic.
Christian Ehrhardt (paelzer) wrote : | #8 |
Hi,
I don't want to wait too long, so while waiting for an answer I made a PPA available that should be tested to confirm the fix is good (for Bionic).
Please test on Bionic from the PPA: https:/
I'll mark the bug incomplete to clearly reflect that I am waiting on an answer.
Having that verified unblocks the upload, which in turn unblocks the SRU considerations according to policy.
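For reference, a minimal sketch of how such a PPA build is usually tested on the host; the PPA path below is a placeholder for the one linked above (the exact name did not survive the bug import), and qemu-slof is the binary package that ships slof.bin:
$ sudo add-apt-repository ppa:<owner>/<ppa-name>   # placeholder for the PPA linked above
$ sudo apt-get update
$ sudo apt-get install qemu-slof
$ dpkg -l qemu-slof                                # confirm the PPA version is now installed
# SLOF is read by QEMU at guest start, so restart the guest to pick it up:
$ virsh destroy <guest> ; virsh start --console <guest>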
Changed in slof (Ubuntu Bionic): | |
status: | Triaged → Incomplete |
Christian Ehrhardt (paelzer) wrote : | #9 |
I tried to verify the issue and the fix myself, which is a prerequisite for good steps to reproduce for the SRU anyway.
I came up with test steps based on what was initially reported.
$ uvt-simplestrea
$ uvt-kvm create --password=ubuntu cpaelzer-bionic release=bionic arch=ppc64el label=daily
$ for i in {1..20}; do
h=$(printf "\x$(printf %x $((98+$i)))");
echo "<disk type='file' device=
qemu-img create -f qcow2 /var/lib/
virsh attach-device artful-testguest disk.xml;
done
$ virsh console <guest>
# in guest then
$ sudo reboot
That works for me with overall 22 disks.
I see a bunch of those:
virtioblk_
virtioblk_
But it boots fine still and all disks are attached just fine.
If I instead add all the disks to the guest definition statically before starting it, the result is the same.
Do I just need more devices - are there any options to be set?
Please also help by providing working steps to reproduce (ideally ones that do not require having 20+ real disks reachable).
Christian Ehrhardt (paelzer) wrote : | #10 |
Tried with the base 2 + 2x20 = 42 disks.
Still working for me as-is.
To summarize what I am waiting on:
1. better steps to reproduce - if possible a slight modification of my suggested workflow, and even better if it does not need a double-digit number of real disks
2. please test/verify the PPA linked in comment #8 in your environment
Nikunj A Dadhania (nikunjad) wrote : | #11 |
@paelzer you need to have a boot order on each of these disks:
<boot order='$i'/>
Christian Ehrhardt (paelzer) wrote : | #12 |
It was still working with 20 disks and a boot index, but with 48 it reproduced.
Thanks Nikunj for the boot index hint.
Overall testcase:
# Prep a guest
$ uvt-simplestrea
$ uvt-kvm create --password=ubuntu bionic-testguest release=bionic arch=ppc64el label=daily
Use `virsh edit bionic-testguest` to do the following three steps:
1. Remove in the os section
<boot dev='hd'/>
2. Add this on your former primary disk:
<boot order='1'/>
3. And then add the xml content generated with:
$ echo "" > disk.xml;
$ for i in {1..24}; do
h=$(printf "\x$(printf %x $((98+$i)))")
echo "<disk type='file' device=
qemu-img create -f qcow2 /var/lib/
echo "<disk type='file' device=
qemu-img create -f qcow2 /var/lib/
done
That makes up for 48 extra disks and the two base devices.
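Since the generator loop above was truncated in the bug import, here is a hedged reconstruction of what such a loop can look like; the image paths, target names and use of a heredoc are assumptions for illustration, not a verbatim copy of the original commands:
$ echo "" > disk.xml
$ n=1
$ for i in {1..24}; do
    h=$(printf "\x$(printf %x $((98+$i)))")        # yields the letters 'c' .. 'z'
    for pfx in vd vda; do                          # two disks per letter -> 48 extra disks (naming assumed)
      n=$((n+1))                                   # boot order 1 stays on the primary disk
      img=/var/lib/libvirt/images/add-${pfx}${h}.qcow2
      qemu-img create -f qcow2 "$img" 1M
      cat >> disk.xml <<EOF
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='${img}'/>
  <target dev='${pfx}${h}' bus='virtio'/>
  <boot order='${n}'/>
</disk>
EOF
    done
  done
The generated <disk> stanzas are then pasted into the <devices> section of the domain via virsh edit, as per the steps above.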
Then run it into console to trigger the bug:
$ virsh start --console artful-testguest
( 300 ) Data Storage Exception [ 1dc4a018 ]
R0 .. R7 R8 .. R15 R16 .. R23 R24 .. R31
000000001dbe2544 000000001e462008 0000000000000000 000000001dc04c00
000000001e665ff0 000000001dc5e248 0000000000000000 000000001dc09268
000000001dc0ed00 000000001dc4a010 0000000000000000 0000000000000003
0000000000000054 000000001e6663f5 0000000000000000 000000000000f001
000000001dc5e1c0 000000000000005b 000000001dc09438 ffffffffffffffff
000000001dc4a018 0000000000000000 0000000000008000 000000001e462010
000000001dc4a018 0000000000000000 000000000000f003 000000001dbe46d4
000000001e462010 0000000000000000 0000000000000006 4038303030303030
CR / XER LR / CTR SRR0 / SRR1 DAR / DSISR
80000404 000000001dbe4ec0 000000001dbe4700 4038303030303030
0000000020000000 000000001dbe46d4 8000000000001000 40000000
That error was reproducible on several restarts.
Then I installed the PPA and reran it, and the guest booted successfully again.
Prepping the SRU template with that info.
description: | updated |
Nikunj A Dadhania (nikunjad) wrote : | #13 |
@paelzer I had a discussion with Alexey; he has created a release label:
https:/
Waiting for the mirror to happen on https:/
Once that happens, he will send a slof update patch to the qemu mailing list.
Christian Ehrhardt (paelzer) wrote : | #14 |
Thanks for the info, Nikunj.
That means we can likely soon pick it up as a sync from Debian again.
But for now we can fix it by picking your changes.
Launchpad Janitor (janitor) wrote : | #15 |
Status changed to 'Confirmed' because the bug affects multiple users.
Changed in slof (Ubuntu Artful): | |
status: | New → Confirmed |
Changed in slof (Ubuntu Xenial): | |
status: | New → Confirmed |
Changed in slof (Ubuntu Zesty): | |
status: | New → Confirmed |
Christian Ehrhardt (paelzer) wrote : | #18 |
My former tests focused on the test case.
I deployed a fresh Power8 system and ran some more tests on the proposed change, but found no issues - so I am going on.
With that done I pushed an MP for review of the packaging changes.
@Nikunj / IBM - waiting on your check of the PPA as well if you can find the time before the final upload.
Nikunj A Dadhania (nikunjad) wrote : | #19 |
On my test environment with the extracted slof.bin from the DEB package, I do not see the issue.
./ppc64-
qemu-system-ppc64: -monitor pty: char device redirected to /dev/pts/1 (label compat_monitor0)
SLOF *******
QEMU Starting
Build Date = Dec 13 2017 13:46:58
FW Version = buildd@ release 20170724
Press "s" to enter Open Firmware.
Boots fine, I have asked Rahul to verify as well in his environment.
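For reference, a hedged sketch of that kind of direct check with an extracted slof.bin; the qemu options and the single test disk below are assumptions (a real reproducer would repeat the -drive/-device pair with increasing bootindex values for 30+ disks), not Nikunj's original command line:
$ dpkg-deb -x qemu-slof_*.deb slof-extracted
$ SLOF=$(find slof-extracted -name slof.bin | head -n1)
$ qemu-img create -f qcow2 disk01.qcow2 1M
$ qemu-system-ppc64 -machine pseries -m 2G -nographic -bios "$SLOF" \
    -drive file=disk01.qcow2,format=qcow2,if=none,id=d1 \
    -device virtio-blk-pci,drive=d1,bootindex=1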
Christian Ehrhardt (paelzer) wrote : | #20 |
I saw the new upstream release; over time this will be picked up by Debian and we will make it a sync again then.
For now I am picking the fix, as tested from the PPA, into Bionic.
Once that has migrated I'll look at the SRUs into Xenial through Artful.
Changed in slof (Ubuntu Bionic): | |
status: | Incomplete → Fix Committed |
Launchpad Janitor (janitor) wrote : | #21 |
This bug was fixed in the package slof - 20170724+
---------------
slof (20170724+
* Fix boot with more than 16 disks (LP: #1734856)
- d/p/0001-
- d/p/0002-
-- Christian Ehrhardt <email address hidden> Wed, 13 Dec 2017 14:46:58 +0100
Changed in slof (Ubuntu Bionic): | |
status: | Fix Committed → Fix Released |
Christian Ehrhardt (paelzer) wrote : | #22 |
MPs for the packaging change in X/Z/A up for review and linked in the bug.
Changed in slof (Ubuntu Artful): | |
status: | Confirmed → In Progress |
Changed in slof (Ubuntu Zesty): | |
status: | Confirmed → In Progress |
Changed in slof (Ubuntu Xenial): | |
status: | Confirmed → In Progress |
Christian Ehrhardt (paelzer) wrote : | #23 |
Doing another, bigger set of checks on Xenial and then opening up the MPs for review.
Christian Ehrhardt (paelzer) wrote : | #24 |
Tests good - MPs open for packaging review
Hello bugproxy, or anyone else affected,
Accepted slof into artful-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-artful to verification-done-artful.
Further information regarding the verification process can be found at https:/
Changed in slof (Ubuntu Artful): | |
status: | In Progress → Fix Committed |
tags: | added: verification-needed verification-needed-artful |
Changed in slof (Ubuntu Zesty): | |
status: | In Progress → Fix Committed |
tags: | added: verification-needed-zesty |
Brian Murray (brian-murray) wrote : | #26 |
Hello bugproxy, or anyone else affected,
Accepted slof into zesty-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-zesty to verification-done-zesty.
Further information regarding the verification process can be found at https:/
Changed in slof (Ubuntu Xenial): | |
status: | In Progress → Fix Committed |
tags: | added: verification-needed-xenial |
Brian Murray (brian-murray) wrote : | #27 |
Hello bugproxy, or anyone else affected,
Accepted slof into xenial-proposed. The package will build now and be available at https:/
Please help us by testing this new package. See https:/
If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-xenial to verification-done-xenial.
Further information regarding the verification process can be found at https:/
tags: | removed: verification-needed-xenial |
Kalpana S Shetty (kalshett) wrote : | #28 |
Artful validation:
-------
I was able to recreate the issue:
Initially I added up to 32 boot-order disks apart from the primary disk (boot order 1) and was not able to recreate it. However, with one more disk added with a boot order, I was able to recreate the reported issue.
ubuntu login: root
Password:
Welcome to Ubuntu 17.10 (GNU/Linux 4.13.0-16-generic ppc64le)
* Documentation: https:/
* Management: https:/
* Support: https:/
46 packages can be updated.
27 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@ubuntu:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 1M 0 disk
vdb 252:16 0 1M 0 disk
vdc 252:32 0 1M 0 disk
vdd 252:48 0 1M 0 disk
vde 252:64 0 1M 0 disk
vdf 252:80 0 1M 0 disk
vdg 252:96 0 1M 0 disk
vdh 252:112 0 1M 0 disk
vdi 252:128 0 1M 0 disk
vdj 252:144 0 1M 0 disk
vdk 252:160 0 1M 0 disk
vdl 252:176 0 1M 0 disk
vdm 252:192 0 1M 0 disk
vdn 252:208 0 1M 0 disk
vdo 252:224 0 1M 0 disk
vdp 252:240 0 1M 0 disk
vdq 252:256 0 1M 0 disk
vdr 252:272 0 1M 0 disk
vds 252:288 0 1M 0 disk
vdt 252:304 0 1M 0 disk
vdu 252:320 0 1M 0 disk
vdv 252:336 0 1M 0 disk
vdw 252:352 0 1M 0 disk
vdx 252:368 0 1M 0 disk
vdy 252:384 0 1M 0 disk
vdz 252:400 0 1M 0 disk
vdaa 252:416 0 50G 0 disk
├─vdaa1 252:417 0 7M 0 part
└─vdaa2 252:418 0 50G 0 part /
vdab 252:432 0 1M 0 disk
vdac 252:448 0 1M 0 disk
vdad 252:464 0 1M 0 disk
vdae 252:480 0 1M 0 disk
vdaf 252:496 0 1M 0 disk
vdag 252:512 0 1M 0 disk
After adding one more, the 34th boot-order disk, I was able to recreate it.
[root@lep8a artful]# vi disk.xml
[root@lep8a artful]# virsh edit kal-artful_vm1
Domain kal-artful_vm1 XML configuration edited.
[root@lep8a artful]# virsh destroy kal-artful_vm1
Domain kal-artful_vm1 destroyed
[root@lep8a artful]# virsh start --console kal-artful_vm1
Domain kal-artful_vm1 started
Connected to domain kal-artful_vm1
Escape character is ^]
SLOF *******
QEMU Starting
Build Date = May 19 2017 07:43:48
FW Version = mockbuild@ release 20170303
Press "s" to enter Open Firmware.
Populating /vdevice methods
Populating /vdevice/
SCSI: Looking for devices
Populating /vdevice/
Populating /vdevice/
Populating /pci@8000000200
...
Kalpana S Shetty (kalshett) wrote : | #29 |
Applied the artful-proposed fixes and was not able to see the reported issue fixed.
Help is needed to apply the artful-proposed fix.
------- Comment From <email address hidden> 2017-12-25 15:10 EDT-------
I'm unable to get this fix applied to the artful guest VM although the slof packages are updated.
-------
I was able to recreate the issue; initially I added up to 32 boot-order disks apart from the primary disk (boot order 1) and was not able to recreate it.
Applied the artful-proposed fixes and was able to boot the guest successfully.
bugproxy (bugproxy) wrote : | #31 |
------- Comment From <email address hidden> 2017-12-25 22:03 EDT-------
(In reply to comment #27)
> -------
> -------
> I could able to recreate the issue;
>
> Initially added upto 32 boot order disks apart from primary disk(boot order
> 1), and not able to recreate it. However with one more disks added with boot
> dev, could able to create the reported issue.
>
>
>
>
>
> Applied artful proposed fixes and could not able to see the reported issue
> fixed.
> Help needed to apply artful-proposed fix.
Please ignore my comment #26 where I mentioned that I was able to use the artful-proposed fix. I am facing issues applying the fix. I need help applying the artful-proposed fix on my VM to make sure the issue is no longer recreatable. In my last comment I did not describe the issue clearly.
Here is the observation:
1. I was able to recreate the reported issue after I added the 34th disk with a boot order option.
2. I am not able to apply the artful-proposed fix and need help in applying it.
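(For reference, a hedged sketch of the usual way to pull the fix from the -proposed pocket on an artful ppc64el host; the mirror and pocket names below are the standard ones for ppc64el and may need adjusting to the local setup, and the fix only takes effect for guests started or restarted afterwards:)
$ echo "deb http://ports.ubuntu.com/ubuntu-ports artful-proposed main universe" | \
    sudo tee /etc/apt/sources.list.d/artful-proposed.list
$ sudo apt-get update
$ sudo apt-get install qemu-slof/artful-proposed   # install just this package from -proposed, on the HOST
$ dpkg -l qemu-slof                                # should now show the fixed 20170724+... version
$ virsh destroy <guest> ; virsh start --console <guest>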
bugproxy (bugproxy) wrote : | #32 |
------- Comment From <email address hidden> 2017-12-29 22:02 EDT-------
I was able to recreate the issue with Ubuntu 17.10 (artful) as the host, running xenial, zesty, and artful guest VMs, where a guest crashed if we tried to add more than 36 disks with a "boot order" set (this is the observation from my testing).
I applied the artful-proposed slof fix on the host and started the xenial, zesty and artful guests. With the proposed artful slof fix, all 3 guests came up and booted successfully, showing all 49 disks added.
Results:
-------
Installed slof version:
root@lep8d:~# apt search slof
Sorting... Done
Full Text Search... Done
qemu-slof/
Slimline Open Firmware -- QEMU PowerPC version
-------
Fixed slof version:
root@lep8d:~# apt-get install qemu-slof/
Reading package lists... Done
Building dependency tree
Reading state information... Done
Selected version '20170724+
Suggested packages:
qemu
The following packages will be upgraded:
qemu-slof
1 upgraded, 0 newly installed, 0 to remove and 37 not upgraded.
Need to get 172 kB of archives.
After this operation, 1,024 B of additional disk space will be used.
Get:1 http://
Fetched 172 kB in 0s (258 kB/s)
(Reading database ... 103312 files and directories currently installed.)
Preparing to unpack .../qemu-
Unpacking qemu-slof (20170724+
Setting up qemu-slof (20170724+
-------
root@lep8d:~# service libvirtd restart
root@lep8d:~# virsh list --all
Id Name State
-------
16 kal-artful_vm1 running
17 kal-xenial_vm1 running
18 kal-zesty_vm1 running
Inside each guest verified that we can see 49 disks:
root@lep8d:~# virsh console kal-artful_vm1
root@ubuntu:~# lsblk | grep disk | wc -l
49
root@lep8d:~# virsh console kal-xenial_vm1
root@ubuntu:~# lsblk | grep disk | wc -l
49
root@lep8d:~# virsh console kal-zesty_vm1
root@ubuntu:~# lsblk | grep disk | wc -l
49
In summary, I have validated on an artful host with 3 different guests and it is working fine. Copying Srikanth to take it forward if anything else needs to be validated.
bugproxy (bugproxy) wrote : | #33 |
------- Comment From <email address hidden> 2018-01-01 00:08 EDT-------
According to https:/
verification-
Ubuntu 17.10 (GNU/Linux 4.13.0-16-generic ppc64le)
qemu-slof/
bugproxy (bugproxy) wrote : | #34 |
------- Comment From <email address hidden> 2018-01-01 00:15 EDT-------
Since artful verification is done, is zesty verification really needed here? https:/
On bionic we would verify it once the alpha build comes out.
tags: |
added: verification-done-artful removed: verification-needed-artful |
Christian Ehrhardt (paelzer) wrote : | #35 |
Thanks for sorting out the Artful test already, bssrikanth.
To be clear (the comments by kalshett are unclear to me at least): you need to test Xenial/Zesty/Artful HOSTS. The fix is in the slof package that is installed on the KVM host.
It actually doesn't even matter which level of guest you start, but you have to do it on the host release you verify.
From my POV verification-
I'll fix that up to have it as -needed again.
TL;DR: Yes it is needed for Xenial and Zesty as well.
Once Xenial and Zesty are done as well, then also set the global verification-done to be complete.
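A short hedged sketch of the host-side check described above (all commands on the HOST being verified; the guest release does not matter):
$ apt-cache policy qemu-slof          # the installed version must be the one from -proposed for that release
$ virsh start --console <guest>
# The SLOF greeting on the console ("Build Date =" / "FW Version =" lines, as in the
# earlier comments) shows which firmware build the guest actually got.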
tags: | added: verification-needed-xenial |
bugproxy (bugproxy) wrote : | #36 |
------- Comment From <email address hidden> 2018-01-02 02:58 EDT-------
Thanks for the clarification. Yes verification being done using host levels: artful, xenial and zesty.
Christian Ehrhardt (paelzer) wrote : | #37 |
Ok, so are all three artful, xenial and zesty done already or are we waiting on xenial and zesty to complete?
bugproxy (bugproxy) wrote : | #38 |
------- Comment From <email address hidden> 2018-01-02 04:38 EDT-------
(In reply to comment #35)
> Ok, so are all three artful, xenial and zesty done already or are we waiting
> on xenial and zesty to complete?
Verification on xenial and zesty is in progress; Prudhvi will update here once done.
Launchpad Janitor (janitor) wrote : | #39 |
This bug was fixed in the package slof - 20170724+
---------------
slof (20170724+
* Fix boot with more than 16 disks (LP: #1734856)
- d/p/0001-
- d/p/0002-
-- Christian Ehrhardt <email address hidden> Mon, 18 Dec 2017 14:18:30 +0100
Changed in slof (Ubuntu Artful): | |
status: | Fix Committed → Fix Released |
The verification of the Stable Release Update for slof has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
------- Comment From <email address hidden> 2018-01-02 07:17 EDT-------
On xenial I am still seeing the issue.
Machine details:
1)kernel level
root@ltc84-
Linux ltc84-pkvm1 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:48 UTC 2017 ppc64le ppc64le ppc64le GNU/Linux
2) slof package level:slof package level:
root@ltc84-
ii qemu-slof 20151103+
3)boot the guest with following xml:
<domain type='kvm' id='2'>
<name>prudhvi_
<uuid>d8466153-
<memory unit='KiB'
<currentMemory unit='KiB'
<vcpu placement=
<resource>
<partition>
</resource>
<os>
<type arch='ppc64le' machine=
</os>
<cpu>
<topology sockets='1' cores='4' threads='8'/>
</cpu>
<clock offset='utc'/>
<on_poweroff>
<on_reboot>
<on_crash>
<devices>
<emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<target dev='vda' bus='virtio'/>
<boot order='1'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<target dev='vdc' bus='virtio'/>
<boot order='2'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<target dev='vdd' bus='virtio'/>
<boot order='3'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<target dev='vde' bus='virtio'/>
<boot order='4'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<target dev='vdf' bus='virtio'/>
<boot order='5'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<target dev='vdg' bus='virtio'/>
<boot order='6'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/
<backingStore/>
<tar...
bugproxy (bugproxy) wrote : | #42 |
------- Comment From <email address hidden> 2018-01-02 07:24 EDT-------
(In reply to comment #38)
> on xenial still i am seeing the issue.
> Machine details:
> 1)kernel level
> root@ltc84-
> Linux ltc84-pkvm1 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:48 UTC
> 2017 ppc64le ppc64le ppc64le GNU/Linux
>
Xenial kernel version -> 4.4.x? Looks like the HWE kernel is not being used.
> 2) slof package level:slof package level:
> root@ltc84-
> ii qemu-slof 20151103+
> all Slimline Open Firmware -- QEMU PowerPC version
bugproxy (bugproxy) wrote : | #43 |
------- Comment From <email address hidden> 2018-01-02 07:42 EDT-------
Does this proposed xenial slof fix depend on the kernel used on the host, or should the proposed slof fix work with both the xenial 4.4.x and 4.10.x kernels?
Christian Ehrhardt (paelzer) wrote : | #44 |
It is mostly independent of the kernel; the fix by Nikunj just needs the new SLOF to be on the host.
Please sync internally on whether the test is correct, as this is the fix that was suggested by you, and at least in my tests it worked. Since former verifications went a bit back and forth and this confusion now looks similar, please make sure of the actual state of this on Xenial and Zesty.
bugproxy (bugproxy) wrote : | #45 |
------- Comment From <email address hidden> 2018-01-02 08:05 EDT-------
Prudhvi gave me access to the system and I have applied the proposed slof and restarted the guest. It seems to work fine with xenial. The guest booted successfully, showing all 49 disks.
root@ltc84-
Reading package lists... Done
Building dependency tree
Reading state information... Done
Selected version '20151103+
Suggested packages:
qemu
The following packages will be upgraded:
qemu-slof
1 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
Need to get 173 kB of archives.
After this operation, 1,024 B of additional disk space will be used.
Get:1 http://
Fetched 173 kB in 0s (267 kB/s)
(Reading database ... 84595 files and directories currently installed.)
Preparing to unpack .../qemu-
Unpacking qemu-slof (20151103+
Setting up qemu-slof (20151103+
bugproxy (bugproxy) wrote : | #46 |
------- Comment From <email address hidden> 2018-01-03 02:12 EDT-------
Tested, it is working fine in both xenial and zesty.
information of zesty:
-------
1)kernel level
root@ltc-
Linux ltc-fire4 4.10.0-42-generic #46-Ubuntu SMP Mon Dec 4 14:35:45 UTC 2017 ppc64le ppc64le ppc64le GNU/Linux
root@ltc-
2) slof package level:slof package level:
root@ltc-
ii qemu-slof 20161019+
3) Recreation steps:
i) Prep a guest
ii) Remove in the os section
<boot dev='hd'/>
iii) Add this on your former primary disk:
<boot order='1'/>
iv) And then add the xml content generated with:
echo "" > disk.xml;
for i in {1..24}; do
h=$(printf "\x$(printf %x $((98+$i)))")
echo "<disk type='file' device=
qemu-img create -f qcow2 /var/lib/
echo "<disk type='file' device=
qemu-img create -f qcow2 /var/lib/
done
v) virsh start guest-name --console
4) Will attach the guest XML and the lsblk command output from the guest.
bugproxy (bugproxy) wrote : zesty guest xml | #47 |
------- Comment (attachment only) From <email address hidden> 2018-01-03 02:16 EDT-------
bugproxy (bugproxy) wrote : zesty lsblk output | #48 |
------- Comment (attachment only) From <email address hidden> 2018-01-03 02:16 EDT-------
Srikanth Aithal (srikanthaithal) wrote : | #49 |
verification done on
1. artful
2. xenial
3. zesty
results in comment 32, 45, 46
tags: |
added: verification-done verification-done-xenial verification-done-zesty removed: verification-needed verification-needed-xenial verification-needed-zesty |
Christian Ehrhardt (paelzer) wrote : | #50 |
Thank you Srikanth (and Team since I saw so many people update the bug).
This is already released for Artful, the SRU Team should soon release it for X/Z as well now.
Launchpad Janitor (janitor) wrote : | #51 |
This bug was fixed in the package slof - 20151103+
---------------
slof (20151103+
* Fix boot with more than 16 disks (LP: #1734856)
- d/p/0001-
- d/p/0002-
-- Christian Ehrhardt <email address hidden> Mon, 18 Dec 2017 14:12:32 +0100
Changed in slof (Ubuntu Xenial): | |
status: | Fix Committed → Fix Released |
Launchpad Janitor (janitor) wrote : | #52 |
This bug was fixed in the package slof - 20161019+
---------------
slof (20161019+
* Fix boot with more than 16 disks (LP: #1734856)
- d/p/0001-
- d/p/0002-
-- Christian Ehrhardt <email address hidden> Mon, 18 Dec 2017 14:15:11 +0100
Changed in slof (Ubuntu Zesty): | |
status: | Fix Committed → Fix Released |
Changed in ubuntu-power-systems: | |
status: | Incomplete → Fix Released |
tags: |
added: targetmilestone-inin16044 removed: targetmilestone-inin--- |
------- Comment From <email address hidden> 2017-11-29 01:56 EDT-------
Patch under discussion upstream
http://patchwork.ozlabs.org/patch/842011/