2020-07-14 13:30:18 |
Dan Streetman |
bug |
|
|
added bug |
2020-07-14 13:30:36 |
Dan Streetman |
nominated for series |
|
Ubuntu Bionic |
|
2020-07-14 13:30:36 |
Dan Streetman |
bug task added |
|
qemu (Ubuntu Bionic) |
|
2020-07-14 13:30:42 |
Dan Streetman |
qemu (Ubuntu Bionic): assignee |
|
Dan Streetman (ddstreet) |
|
2020-07-14 13:30:43 |
Dan Streetman |
qemu (Ubuntu Bionic): importance |
Undecided |
Medium |
|
2020-07-14 13:30:46 |
Dan Streetman |
qemu (Ubuntu Bionic): status |
New |
In Progress |
|
2020-07-14 13:30:49 |
Dan Streetman |
qemu (Ubuntu): status |
New |
Fix Released |
|
2020-07-14 14:06:33 |
Dan Streetman |
description |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
TBD
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
|
2020-07-14 14:07:23 |
Dan Streetman |
description |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
As I don't have a DPDK setup to reproduce this, I am relying on the reporter of this bug to test and verify.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
|
2020-07-15 12:28:14 |
Robie Basak |
qemu (Ubuntu Bionic): status |
In Progress |
Fix Committed |
|
2020-07-15 12:28:15 |
Robie Basak |
bug |
|
|
added subscriber Ubuntu Stable Release Updates Team |
2020-07-15 12:28:16 |
Robie Basak |
bug |
|
|
added subscriber SRU Verification |
2020-07-15 12:28:19 |
Robie Basak |
tags |
|
verification-needed verification-needed-bionic |
|
2020-07-15 12:55:57 |
Dan Streetman |
description |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
As I don't have a DPDK setup to reproduce this, I am relying on the reporter of this bug to test and verify.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
Because the vhost-user driver cannot dictate how many mem regions are present in the qemu guest, if the vhost-user driver calculates more than 8 regions at driver initialization time, this api limitation causes the qemu instance that is attempting to add/initialize a new vhost-user interface (nic) to fail, leaving the qemu instance unable to use the nic. Typically, this means a qemu instance that is supposed to connect to DPDK-OVS cannot, and has broken or missing networking; in most cases it is unusable.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
As I don't have a DPDK setup to reproduce this, I am relying on the reporter of this bug to test and verify.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
|
2020-07-15 13:08:20 |
Dan Streetman |
description |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
Because the vhost-user driver cannot dictate how many mem regions are present in the qemu guest, if the vhost-user driver calculates more than 8 regions at driver initialization time, this api limitation causes the qemu instance that is attempting to add/initialize a new vhost-user interface (nic) to fail, leaving the qemu instance unable to use the nic. Typically, this means a qemu instance that is supposed to connect to DPDK-OVS cannot, and has broken or missing networking; in most cases it is unusable.
[test case]
start a qemu guest with at least one vhost-user interface, and more than 8 discontiguous memory regions. The vhost-user device will fail to init due to exceeding its max memory region limit.
As I don't have a DPDK setup to reproduce this, I am relying on the reporter of this bug to test and verify.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
[impact]
the impact is the same as bug 1886704: the qemu vhost-user driver fails to init. see that bug for more details on the impact.
Because the vhost-user driver cannot dictate how many mem regions are present in the qemu guest, if the vhost-user driver calculates more than 8 regions at driver initialization time, this api limitation causes the qemu instance that is attempting to add/initialize a new vhost-user interface (nic) to fail, leaving the qemu instance unable to use the nic. Typically, this means a qemu instance that is supposed to connect to DPDK-OVS cannot, and has broken or missing networking; in most cases it is unusable.
[test case]
start a qemu guest with at least one vhost-user interface (e.g. using DPDK-OVS), and more than 8 discontiguous memory regions. This might happen when using multiple PCI passthrough devices in combination with vhost-user interface(s). The vhost-user device will fail to init due to exceeding its max memory region limit.
As I don't have a DPDK setup to reproduce this, I am relying on the reporter of this bug to test and verify.
[regression potential]
as this causes vhost-user to ignore some mem regions, any regression would likely involve problems with the vhost-user interface: possibly failure to init or configure the interface, or problems while using it.
[scope]
this is needed for bionic.
this is fixed upstream by commits 9e2a2a3e083fec1e8059b331e3998c0849d779c1 and 988a27754bbbc45698f7acb54352e5a1ae699514, which were first included in v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
I am not proposing this for xenial at this time, as there is more context difference and higher regression potential, and no one has reported needing this fix on xenial.
[other info]
this is closely related to bug 1886704, but that bug is specifically about the 8 mem region limit of the vhost-user api. This bug doesn't attempt to fix that limitation (as that requires a new extension of the vhost-user api to increase the max mem regions); it only backports existing upstream patches that fix the vhost region calculations and allow the vhost-user driver to indicate which mem regions it doesn't need to use, so those are ignored, keeping the total under the vhost-user limit. |
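The test case above can be sketched as a qemu invocation; this is only an illustrative reconstruction, not taken from the bug report: the socket path, memory sizes, and backend names are assumptions, and a vhost-user backend (e.g. DPDK-OVS) must already be listening on the socket.

```shell
# Hypothetical reproduction sketch. Assumes a vhost-user backend is
# serving /tmp/vhost-user0.sock. share=on is required so the backend
# can map guest memory. Additional devices (e.g. PCI passthrough) can
# push the computed region count past the 8-region vhost-user limit,
# at which point an unpatched qemu fails to init the device.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -m 4096 \
  -object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
  -netdev vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0
```

With the backported patches, regions the vhost-user driver does not need are ignored, keeping the total under the limit and allowing the device to initialize.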
|
2020-07-16 16:50:28 |
Dan Streetman |
tags |
verification-needed verification-needed-bionic |
verification-done verification-done-bionic |
|
2020-07-23 17:11:21 |
Launchpad Janitor |
qemu (Ubuntu Bionic): status |
Fix Committed |
Fix Released |
|
2020-07-23 17:11:28 |
Łukasz Zemczak |
bug |
|
|
removed subscriber Ubuntu Stable Release Updates Team |