2018-11-30 02:12:47 |
Vern Hart |
bug |
|
|
added bug |
2018-11-30 02:12:47 |
Vern Hart |
attachment added |
|
failed pxe boot console https://bugs.launchpad.net/bugs/1805920/+attachment/5217555/+files/Screenshot%20from%202018-11-16%2014-44-57.png |
|
2018-11-30 02:13:18 |
Vern Hart |
bug |
|
|
added subscriber Canonical Field Critical |
2018-11-30 03:21:23 |
Andres Rodriguez |
maas: status |
New |
Incomplete |
|
2018-11-30 14:29:46 |
Jason Hobbs |
bug |
|
|
added subscriber Jason Hobbs |
2018-12-01 03:18:29 |
Andres Rodriguez |
bug task added |
|
ipxe (Ubuntu) |
|
2018-12-01 03:22:45 |
Mike Pontillo |
maas: status |
Incomplete |
Opinion |
|
2018-12-01 03:22:47 |
Mike Pontillo |
maas: status |
Opinion |
Invalid |
|
2018-12-03 19:37:43 |
Vern Hart |
attachment added |
|
Cisco Bug CSCuu29425 - native (untagged) packets on RHEL7 seen as tagged-VLAN 0.pdf https://bugs.launchpad.net/maas/+bug/1805920/+attachment/5218635/+files/Cisco%20Bug%20CSCuu29425%20-%20native%20%28untagged%29%20packets%20on%20RHEL7%20seen%20as%20tagged-VLAN%200.pdf |
|
2018-12-05 19:29:05 |
Joshua Powers |
bug |
|
|
added subscriber Joshua Powers |
2018-12-05 23:52:49 |
Andres Rodriguez |
bug task added |
|
linux (Ubuntu) |
|
2018-12-06 00:00:05 |
Ubuntu Kernel Bot |
linux (Ubuntu): status |
New |
Incomplete |
|
2018-12-06 00:10:00 |
Mike Pontillo |
linux (Ubuntu): status |
Incomplete |
Confirmed |
|
2018-12-06 00:14:25 |
Mike Pontillo |
ipxe (Ubuntu): status |
New |
Confirmed |
|
2018-12-06 10:41:53 |
Christian Ehrhardt |
bug |
|
|
added subscriber Christian Ehrhardt |
2018-12-11 10:59:23 |
Launchpad Janitor |
ipxe (Ubuntu): status |
Confirmed |
Fix Released |
|
2018-12-11 11:47:33 |
Christian Ehrhardt |
bug task added |
|
ipxe-qemu-256k-compat (Ubuntu) |
|
2018-12-11 11:47:45 |
Christian Ehrhardt |
linux (Ubuntu): status |
Confirmed |
Invalid |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
nominated for series |
|
Ubuntu Cosmic |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
linux (Ubuntu Cosmic) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe (Ubuntu Cosmic) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe-qemu-256k-compat (Ubuntu Cosmic) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
nominated for series |
|
Ubuntu Xenial |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
linux (Ubuntu Xenial) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe (Ubuntu Xenial) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe-qemu-256k-compat (Ubuntu Xenial) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
nominated for series |
|
Ubuntu Disco |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
linux (Ubuntu Disco) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe (Ubuntu Disco) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe-qemu-256k-compat (Ubuntu Disco) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
nominated for series |
|
Ubuntu Trusty |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
linux (Ubuntu Trusty) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe (Ubuntu Trusty) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe-qemu-256k-compat (Ubuntu Trusty) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
nominated for series |
|
Ubuntu Bionic |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
linux (Ubuntu Bionic) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe (Ubuntu Bionic) |
|
2018-12-11 11:47:59 |
Christian Ehrhardt |
bug task added |
|
ipxe-qemu-256k-compat (Ubuntu Bionic) |
|
2018-12-11 11:48:13 |
Christian Ehrhardt |
bug task deleted |
linux (Ubuntu Trusty) |
|
|
2018-12-11 11:48:19 |
Christian Ehrhardt |
bug task deleted |
linux (Ubuntu Xenial) |
|
|
2018-12-11 11:48:25 |
Christian Ehrhardt |
bug task deleted |
linux (Ubuntu Bionic) |
|
|
2018-12-11 11:48:30 |
Christian Ehrhardt |
bug task deleted |
linux (Ubuntu Cosmic) |
|
|
2018-12-11 11:48:35 |
Christian Ehrhardt |
bug task deleted |
linux (Ubuntu Disco) |
|
|
2018-12-11 11:48:46 |
Christian Ehrhardt |
ipxe-qemu-256k-compat (Ubuntu Trusty): status |
New |
Invalid |
|
2018-12-11 11:48:56 |
Christian Ehrhardt |
ipxe-qemu-256k-compat (Ubuntu Xenial): status |
New |
Invalid |
|
2018-12-11 11:49:05 |
Christian Ehrhardt |
bug task deleted |
ipxe-qemu-256k-compat (Ubuntu Trusty) |
|
|
2018-12-11 11:49:10 |
Christian Ehrhardt |
bug task deleted |
ipxe-qemu-256k-compat (Ubuntu Xenial) |
|
|
2018-12-11 11:49:15 |
Christian Ehrhardt |
bug task deleted |
ipxe-qemu-256k-compat (Ubuntu Bionic) |
|
|
2018-12-11 11:49:19 |
Christian Ehrhardt |
bug task deleted |
ipxe-qemu-256k-compat (Ubuntu Cosmic) |
|
|
2018-12-11 11:49:24 |
Christian Ehrhardt |
bug task deleted |
ipxe-qemu-256k-compat (Ubuntu Disco) |
|
|
2018-12-11 11:49:30 |
Christian Ehrhardt |
ipxe-qemu-256k-compat (Ubuntu): status |
New |
Won't Fix |
|
2018-12-11 11:49:34 |
Christian Ehrhardt |
ipxe (Ubuntu Trusty): status |
New |
Won't Fix |
|
2018-12-11 11:49:36 |
Christian Ehrhardt |
ipxe (Ubuntu Xenial): status |
New |
Won't Fix |
|
2018-12-11 11:49:41 |
Christian Ehrhardt |
ipxe-qemu-256k-compat (Ubuntu): status |
Won't Fix |
Invalid |
|
2018-12-11 11:49:44 |
Christian Ehrhardt |
ipxe (Ubuntu Bionic): status |
New |
Triaged |
|
2018-12-11 11:49:46 |
Christian Ehrhardt |
ipxe (Ubuntu Cosmic): status |
New |
Triaged |
|
2018-12-11 15:03:54 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/360678 |
|
2018-12-11 15:04:15 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/360679 |
|
2018-12-11 15:13:20 |
Christian Ehrhardt |
description |
I have three MAAS rack/region nodes which are blades in a Cisco UCS chassis. This is an FCE deployment where MAAS has two DHCP servers, infra1 is the primary and infra3 is the secondary. The pod VMs on infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE boot. If I reconfigure the subnet to provide DHCP on infra2 (either as primary or secondary) then the pod VMs on infra2 will PXE boot but the pod VMs on the demoted infra node (that no longer serves DHCP) now fail to PXE boot.
While commissioning a pod VM on infra2 I captured network traffic with tcpdump on the vnet interface.
Here is the dump when the PXE boot fails (no dhcp server on infra2):
https://pastebin.canonical.com/p/THW2gTSv4S/
Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):
https://pastebin.canonical.com/p/HH3XvZtTGG/
The only difference I can see is that in the unsuccessful scenario, the reply is an 802.1q packet -- it's got a vlan tag for vlan 0. Normally vlan 0 traffic is passed as if it is not tagged and indeed, I can ping between the blades with no problem. Outgoing packets are untagged but incoming packets are tagged vlan 0 -- but the ping works. It seems vlan 0 is used as a part of 802.1p to set priority of packets. This is separate from vlan, it just happens to use that ethertype to do the priority tagging.
Someone confirmed to me that the iPXE source drops all packets that are vlan tagged.
The customer is unable to figure out why the packets between blades are getting vlan tagged, so we either need to figure out how to allow iPXE to accept vlan 0 or the customer will need to use different equipment for the MAAS nodes.
I found a conversation on the ipxe-devel mailing list that suggested a commit was submitted and signed off but that was from 2016 so I'm not sure what became of it. Notable messages in the thread:
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004916.html
http://lists.ipxe.org/pipermail/ipxe-devel/2016-July/005099.html
Would it be possible to install a local patch as part of the FCE deployment? I suspect the patch(es) mentioned in the above thread would require some modification to apply properly. |
[Impact]
* VLAN 0 is special (for QoS actually, not a real VLAN)
* Some components in the stack accidentally strip it, so does ipxe in
this case.
* Fix by porting a fix that is carried by other distributions as upstream
didn't follow the suggestion but it is needed for the use case affected
by the bug here (Thanks Andres)
[Test Case]
* TODO
[Regression Potential]
* The only reference to VLAN tags on iPXE boot that we found was on iBFT
boot for SCSI, we tested that in comment #34 and it still worked fine.
* We didn't see such cases on review, but there might be use cases that
made some unexpected use of the headers which are now stripped. But
that seems wrong.
[Other Info]
* n/a
---
I have three MAAS rack/region nodes which are blades in a Cisco UCS chassis. This is an FCE deployment where MAAS has two DHCP servers, infra1 is the primary and infra3 is the secondary. The pod VMs on infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE boot. If I reconfigure the subnet to provide DHCP on infra2 (either as primary or secondary) then the pod VMs on infra2 will PXE boot but the pod VMs on the demoted infra node (that no longer serves DHCP) now fail to PXE boot.
While commissioning a pod VM on infra2 I captured network traffic with tcpdump on the vnet interface.
Here is the dump when the PXE boot fails (no dhcp server on infra2):
https://pastebin.canonical.com/p/THW2gTSv4S/
Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):
https://pastebin.canonical.com/p/HH3XvZtTGG/
The only difference I can see is that in the unsuccessful scenario, the reply is an 802.1q packet -- it's got a vlan tag for vlan 0. Normally vlan 0 traffic is passed as if it is not tagged and indeed, I can ping between the blades with no problem. Outgoing packets are untagged but incoming packets are tagged vlan 0 -- but the ping works. It seems vlan 0 is used as a part of 802.1p to set priority of packets. This is separate from vlan, it just happens to use that ethertype to do the priority tagging.
Someone confirmed to me that the iPXE source drops all packets that are vlan tagged.
The customer is unable to figure out why the packets between blades are getting vlan tagged, so we either need to figure out how to allow iPXE to accept vlan 0 or the customer will need to use different equipment for the MAAS nodes.
I found a conversation on the ipxe-devel mailing list that suggested a commit was submitted and signed off but that was from 2016 so I'm not sure what became of it. Notable messages in the thread:
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004916.html
http://lists.ipxe.org/pipermail/ipxe-devel/2016-July/005099.html
Would it be possible to install a local patch as part of the FCE deployment? I suspect the patch(es) mentioned in the above thread would require some modification to apply properly. |
|
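The description above explains the core of the bug: replies arrive as 802.1Q frames tagged with VLAN 0, which carries only an 802.1p priority and should be processed as if untagged, yet iPXE drops any tagged frame. A minimal Python sketch of that "treat VLAN 0 as untagged" handling (illustrative only, not the actual iPXE patch; the function name and layout are assumptions):

```python
import struct

ETH_P_8021Q = 0x8100  # 802.1Q TPID at the ethertype position

def strip_vlan0(frame: bytes) -> bytes:
    """Return the frame with its 802.1Q tag removed if the VLAN ID is 0.

    VLAN 0 carries only an 802.1p priority (QoS) value, so such a frame
    should be handled as if it were untagged.  Frames tagged with a real
    VLAN ID (1-4094) are left untouched.
    """
    if len(frame) < 18:
        return frame  # too short to carry an 802.1Q header
    tpid = struct.unpack_from("!H", frame, 12)[0]
    if tpid != ETH_P_8021Q:
        return frame  # not 802.1Q tagged at all
    tci = struct.unpack_from("!H", frame, 14)[0]
    vid = tci & 0x0FFF  # low 12 bits of the TCI are the VLAN ID
    if vid != 0:
        return frame  # real VLAN membership, keep the tag
    # Drop the 4-byte 802.1Q header (TPID + TCI) at offsets 12..15.
    return frame[:12] + frame[16:]
```

With this behavior, the priority-tagged DHCP replies seen in the failing capture would be accepted like their untagged counterparts, while genuinely VLAN-tagged traffic is still filtered.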
2018-12-11 15:14:16 |
Christian Ehrhardt |
description |
[Impact]
* VLAN 0 is special (for QoS actually, not a real VLAN)
* Some components in the stack accidentally strip it, so does ipxe in
this case.
* Fix by porting a fix that is carried by other distributions as upstream
didn't follow the suggestion but it is needed for the use case affected
by the bug here (Thanks Andres)
[Test Case]
* TODO
[Regression Potential]
* The only reference to VLAN tags on iPXE boot that we found was on iBFT
boot for SCSI, we tested that in comment #34 and it still worked fine.
* We didn't see such cases on review, but there might be use cases that
made some unexpected use of the headers which are now stripped. But
that seems wrong.
[Other Info]
* n/a
---
I have three MAAS rack/region nodes which are blades in a Cisco UCS chassis. This is an FCE deployment where MAAS has two DHCP servers, infra1 is the primary and infra3 is the secondary. The pod VMs on infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE boot. If I reconfigure the subnet to provide DHCP on infra2 (either as primary or secondary) then the pod VMs on infra2 will PXE boot but the pod VMs on the demoted infra node (that no longer serves DHCP) now fail to PXE boot.
While commissioning a pod VM on infra2 I captured network traffic with tcpdump on the vnet interface.
Here is the dump when the PXE boot fails (no dhcp server on infra2):
https://pastebin.canonical.com/p/THW2gTSv4S/
Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):
https://pastebin.canonical.com/p/HH3XvZtTGG/
The only difference I can see is that in the unsuccessful scenario, the reply is an 802.1q packet -- it's got a vlan tag for vlan 0. Normally vlan 0 traffic is passed as if it is not tagged and indeed, I can ping between the blades with no problem. Outgoing packets are untagged but incoming packets are tagged vlan 0 -- but the ping works. It seems vlan 0 is used as a part of 802.1p to set priority of packets. This is separate from vlan, it just happens to use that ethertype to do the priority tagging.
Someone confirmed to me that the iPXE source drops all packets that are vlan tagged.
The customer is unable to figure out why the packets between blades are getting vlan tagged, so we either need to figure out how to allow iPXE to accept vlan 0 or the customer will need to use different equipment for the MAAS nodes.
I found a conversation on the ipxe-devel mailing list that suggested a commit was submitted and signed off but that was from 2016 so I'm not sure what became of it. Notable messages in the thread:
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004916.html
http://lists.ipxe.org/pipermail/ipxe-devel/2016-July/005099.html
Would it be possible to install a local patch as part of the FCE deployment? I suspect the patch(es) mentioned in the above thread would require some modification to apply properly. |
[Impact]
* VLAN 0 is special (for QoS actually, not a real VLAN)
* Some components in the stack accidentally strip it, so does ipxe in
this case.
* Fix by porting a fix that is carried by other distributions as upstream
didn't follow the suggestion but it is needed for the use case affected
by the bug here (Thanks Andres)
[Test Case]
* TODO
[Regression Potential]
* The only reference to VLAN tags on iPXE boot that we found was on iBFT
boot for SCSI, we tested that in comment #34 and it still worked fine.
* We didn't see such cases on review, but there might be use cases that
made some unexpected use of the headers which are now stripped. But
that seems wrong.
[Other Info]
* n/a
---
I have three MAAS rack/region nodes which are blades in a Cisco UCS chassis. This is an FCE deployment where MAAS has two DHCP servers, infra1 is the primary and infra3 is the secondary. The pod VMs on infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE boot. If I reconfigure the subnet to provide DHCP on infra2 (either as primary or secondary) then the pod VMs on infra2 will PXE boot but the pod VMs on the demoted infra node (that no longer serves DHCP) now fail to PXE boot.
While commissioning a pod VM on infra2 I captured network traffic with tcpdump on the vnet interface.
Here is the dump when the PXE boot fails (no dhcp server on infra2):
https://pastebin.canonical.com/p/THW2gTSv4S/
Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):
https://pastebin.canonical.com/p/HH3XvZtTGG/
The only difference I can see is that in the unsuccessful scenario, the reply is an 802.1q packet -- it's got a vlan tag for vlan 0. Normally vlan 0 traffic is passed as if it is not tagged and indeed, I can ping between the blades with no problem. Outgoing packets are untagged but incoming packets are tagged vlan 0 -- but the ping works. It seems vlan 0 is used as a part of 802.1p to set priority of packets. This is separate from vlan, it just happens to use that ethertype to do the priority tagging.
Someone confirmed to me that the iPXE source drops all packets that are vlan tagged.
The customer is unable to figure out why the packets between blades are getting vlan tagged, so we either need to figure out how to allow iPXE to accept vlan 0 or the customer will need to use different equipment for the MAAS nodes.
I found a conversation on the ipxe-devel mailing list that suggested a commit was submitted and signed off but that was from 2016 so I'm not sure what became of it. Notable messages in the thread:
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004916.html
http://lists.ipxe.org/pipermail/ipxe-devel/2016-July/005099.html
Would it be possible to install a local patch as part of the FCE deployment? I suspect the patch(es) mentioned in the above thread would require some modification to apply properly. |
|
2018-12-12 11:17:34 |
Christian Ehrhardt |
ipxe (Ubuntu Bionic): status |
Triaged |
Incomplete |
|
2018-12-12 11:17:36 |
Christian Ehrhardt |
ipxe (Ubuntu Cosmic): status |
Triaged |
Incomplete |
|
2018-12-18 13:12:04 |
Christian Ehrhardt |
description |
[Impact]
* VLAN 0 is special (for QoS actually, not a real VLAN)
* Some components in the stack accidentally strip it, so does ipxe in
this case.
* Fix by porting a fix that is carried by other distributions as upstream
didn't follow the suggestion but it is needed for the use case affected
by the bug here (Thanks Andres)
[Test Case]
* TODO
[Regression Potential]
* The only reference to VLAN tags on iPXE boot that we found was on iBFT
boot for SCSI, we tested that in comment #34 and it still worked fine.
* We didn't see such cases on review, but there might be use cases that
made some unexpected use of the headers which are now stripped. But
that seems wrong.
[Other Info]
* n/a
---
I have three MAAS rack/region nodes which are blades in a Cisco UCS chassis. This is an FCE deployment where MAAS has two DHCP servers, infra1 is the primary and infra3 is the secondary. The pod VMs on infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE boot. If I reconfigure the subnet to provide DHCP on infra2 (either as primary or secondary) then the pod VMs on infra2 will PXE boot but the pod VMs on the demoted infra node (that no longer serves DHCP) now fail to PXE boot.
While commissioning a pod VM on infra2 I captured network traffic with tcpdump on the vnet interface.
Here is the dump when the PXE boot fails (no dhcp server on infra2):
https://pastebin.canonical.com/p/THW2gTSv4S/
Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):
https://pastebin.canonical.com/p/HH3XvZtTGG/
The only difference I can see is that in the unsuccessful scenario, the reply is an 802.1q packet -- it's got a vlan tag for vlan 0. Normally vlan 0 traffic is passed as if it is not tagged and indeed, I can ping between the blades with no problem. Outgoing packets are untagged but incoming packets are tagged vlan 0 -- but the ping works. It seems vlan 0 is used as a part of 802.1p to set priority of packets. This is separate from vlan, it just happens to use that ethertype to do the priority tagging.
Someone confirmed to me that the iPXE source drops all packets that are vlan tagged.
The customer is unable to figure out why the packets between blades are getting vlan tagged, so we either need to figure out how to allow iPXE to accept vlan 0 or the customer will need to use different equipment for the MAAS nodes.
I found a conversation on the ipxe-devel mailing list that suggested a commit was submitted and signed off but that was from 2016 so I'm not sure what became of it. Notable messages in the thread:
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004916.html
http://lists.ipxe.org/pipermail/ipxe-devel/2016-July/005099.html
Would it be possible to install a local patch as part of the FCE deployment? I suspect the patch(es) mentioned in the above thread would require some modification to apply properly. |
[Impact]
* VLAN 0 is special (for QoS actually, not a real VLAN)
* Some components in the stack accidentally strip it, so does ipxe in
this case.
* Fix by porting a fix that is carried by other distributions as upstream
didn't follow the suggestion but it is needed for the use case affected
by the bug here (Thanks Andres)
[Test Case]
* Comment #42 contains a virtual test setup to understand the case but it
does NOT trigger the issue. That requires special switch HW that adds
VLAN 0 tags for QoS. Therefore Vern (reporter) will test that on a
customer site with such hardware being affected by this issue.
[Regression Potential]
* The only reference to VLAN tags on iPXE boot that we found was on iBFT
boot for SCSI, we tested that in comment #34 and it still worked fine.
* We didn't see such cases on review, but there might be use cases that
made some unexpected use of the headers which are now stripped. But
that seems wrong.
[Other Info]
* n/a
---
I have three MAAS rack/region nodes which are blades in a Cisco UCS chassis. This is an FCE deployment where MAAS has two DHCP servers, infra1 is the primary and infra3 is the secondary. The pod VMs on infra1 and infra3 PXE boot fine but the pod VMs on infra2 fail to PXE boot. If I reconfigure the subnet to provide DHCP on infra2 (either as primary or secondary) then the pod VMs on infra2 will PXE boot but the pod VMs on the demoted infra node (that no longer serves DHCP) now fail to PXE boot.
While commissioning a pod VM on infra2 I captured network traffic with tcpdump on the vnet interface.
Here is the dump when the PXE boot fails (no dhcp server on infra2):
https://pastebin.canonical.com/p/THW2gTSv4S/
Here is the dump when PXE boot succeeds (when infra2 is serving dhcp):
https://pastebin.canonical.com/p/HH3XvZtTGG/
The only difference I can see is that in the unsuccessful scenario, the reply is an 802.1q packet -- it's got a vlan tag for vlan 0. Normally vlan 0 traffic is passed as if it is not tagged and indeed, I can ping between the blades with no problem. Outgoing packets are untagged but incoming packets are tagged vlan 0 -- but the ping works. It seems vlan 0 is used as a part of 802.1p to set priority of packets. This is separate from vlan, it just happens to use that ethertype to do the priority tagging.
Someone confirmed to me that the iPXE source drops all packets that are vlan tagged.
The customer is unable to figure out why the packets between blades are getting vlan tagged, so we either need to figure out how to allow iPXE to accept vlan 0 or the customer will need to use different equipment for the MAAS nodes.
I found a conversation on the ipxe-devel mailing list that suggested a commit was submitted and signed off but that was from 2016 so I'm not sure what became of it. Notable messages in the thread:
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004916.html
http://lists.ipxe.org/pipermail/ipxe-devel/2016-July/005099.html
Would it be possible to install a local patch as part of the FCE deployment? I suspect the patch(es) mentioned in the above thread would require some modification to apply properly. |
|
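The "priority tag" at the heart of this bug is an ordinary 802.1Q Tag Control Information field whose 12-bit VLAN ID happens to be 0, leaving only the 3-bit 802.1p priority meaningful. A small hypothetical helper (the function name is illustrative, not from iPXE or the kernel) makes the encoding explicit:

```python
def pack_tci(priority: int, dei: int, vid: int) -> int:
    """Pack an 802.1Q Tag Control Information field.

    priority: 3-bit 802.1p priority code point (0-7)
    dei:      1-bit drop-eligible indicator
    vid:      12-bit VLAN ID; 0 means "priority tag only"
    """
    assert 0 <= priority <= 7 and dei in (0, 1) and 0 <= vid <= 4094
    return (priority << 13) | (dei << 12) | vid

# The failing capture showed replies tagged with VID 0: the frame carries
# only a QoS priority, so it should be handled as if it were untagged,
# yet the 0x8100 ethertype makes a naive receiver treat it as VLAN traffic.
priority_tag = pack_tci(priority=5, dei=0, vid=0)
```

This is why ping between the blades still works (the kernel accepts priority-tagged frames) while unpatched iPXE, which rejects any frame with an 802.1Q header, does not.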
2019-01-11 19:48:23 |
Vern Hart |
attachment added |
|
screenshot of failed boot https://bugs.launchpad.net/maas/+bug/1805920/+attachment/5228521/+files/Screen%20Shot%202019-01-11%20at%207.36.20%20PM.png |
|
2019-01-15 20:11:07 |
Brian Murray |
ipxe (Ubuntu Cosmic): status |
Incomplete |
Fix Committed |
|
2019-01-15 20:11:10 |
Brian Murray |
bug |
|
|
added subscriber Ubuntu Stable Release Updates Team |
2019-01-15 20:11:14 |
Brian Murray |
bug |
|
|
added subscriber SRU Verification |
2019-01-15 20:11:20 |
Brian Murray |
tags |
field-critical |
field-critical verification-needed verification-needed-cosmic |
|
2019-01-15 20:26:40 |
Brian Murray |
ipxe (Ubuntu Bionic): status |
Incomplete |
Fix Committed |
|
2019-01-15 20:26:47 |
Brian Murray |
tags |
field-critical verification-needed verification-needed-cosmic |
field-critical verification-needed verification-needed-bionic verification-needed-cosmic |
|
2019-02-27 10:22:33 |
Vern Hart |
tags |
field-critical verification-needed verification-needed-bionic verification-needed-cosmic |
field-critical verification-done-bionic verification-done-cosmic verification-needed |
|
2019-02-27 10:23:01 |
Vern Hart |
tags |
field-critical verification-done-bionic verification-done-cosmic verification-needed |
field-critical verification-done verification-done-bionic verification-done-cosmic |
|
2019-03-04 16:32:54 |
Łukasz Zemczak |
removed subscriber Ubuntu Stable Release Updates Team |
|
|
|
2019-03-04 16:33:13 |
Launchpad Janitor |
ipxe (Ubuntu Cosmic): status |
Fix Committed |
Fix Released |
|
2019-03-04 16:46:13 |
Launchpad Janitor |
ipxe (Ubuntu Bionic): status |
Fix Committed |
Fix Released |
|
2019-07-24 21:31:10 |
Brad Figg |
tags |
field-critical verification-done verification-done-bionic verification-done-cosmic |
cscc field-critical verification-done verification-done-bionic verification-done-cosmic |
|
2020-06-25 07:21:34 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/386372 |
|
2020-06-25 07:21:52 |
Christian Ehrhardt |
merge proposal unlinked |
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/386372 |
|
|
2020-06-29 12:53:30 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/386372 |
|
2020-06-30 08:21:47 |
Launchpad Janitor |
merge proposal unlinked |
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/386372 |
|
|
2021-05-05 06:54:44 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~paelzer/ubuntu/+source/ipxe/+git/ipxe/+merge/402236 |
|