Rebuild server with NUMATopologyFilter enabled fails (in some cases)

Bug #1804502 reported by Inbar Stolberg
This bug affects 21 people
Affects                    Status         Importance  Assigned to   Milestone
OpenStack Compute (nova)   Fix Released   Medium      sean mooney
Queens                     In Progress    Low         sean mooney
Rocky                      Fix Released   Medium      Lee Yarwood
Stein                      Fix Committed  Medium      sean mooney
Train                      Fix Committed  Medium      sean mooney

Bug Description

Description
===========
A server rebuild will fail in the nova scheduler on the NUMATopologyFilter if the compute node does not have enough spare capacity, even though the running server should already be accounted for in that calculation.

To resolve the issue, the NUMATopologyFilter needs to be fixed so that it does not recompute placement as if for a new instance when the scheduling request is due to a rebuild.

The result is that the server rebuild fails with a "No valid host was found" error.

(Do not confuse the resize and rebuild operations.)
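To make the failure mode concrete, here is a minimal, self-contained Python sketch of the behaviour described above; the function, names and numbers are illustrative assumptions, not nova code:

    # Illustrative sketch only, not nova code: why a rebuild on a full host fails.
    def host_passes_numa(free_dedicated_pcpus, requested_pcpus):
        # Simplified stand-in for what the NUMATopologyFilter effectively asks:
        # "can a brand-new instance of this size be fitted onto the host now?"
        return requested_pcpus <= free_dedicated_pcpus

    # Assume the host has 6 dedicated pCPUs and the instance being rebuilt
    # already pins all 6, so nothing is free.
    free_pcpus_on_host = 0
    instance_pcpus = 6

    # A rebuild keeps the instance's existing pCPUs, so it should pass, but the
    # filter is not rebuild-aware and asks for 6 *additional* pCPUs:
    print(host_passes_numa(free_pcpus_on_host, instance_pcpus))  # False
    # -> the scheduler reports "No valid host was found."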

Steps to reproduce
==================

1. Create a flavor with extra specs that pin it to a specific compute node (use a host aggregate with a matching key:value pair).
Make sure the flavor contains NUMA topology related extra specs, for example (see the sketch after this list):
hw:cpu_cores='1', hw:cpu_policy='dedicated', hw:cpu_sockets='6', hw:cpu_thread_policy='prefer', hw:cpu_threads='1', hw:mem_page_size='large', location='area51'

2. Create a server with that flavor on that compute node (preferably using a Heat stack).
3. (Try to) rebuild the server using a stack update.
4. The issue is reproduced.
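The sketch below (plain Python, illustrative names only, not nova code) shows how the dedicated-CPU extra specs in step 1 turn into a per-instance request for pinned host CPUs, which is what the NUMATopologyFilter then has to fit onto the compute node:

    # Illustrative only: the flavor extra specs from step 1 as a dict.
    flavor_extra_specs = {
        "hw:cpu_policy": "dedicated",    # every vCPU needs its own host pCPU
        "hw:cpu_sockets": "6",
        "hw:cpu_cores": "1",
        "hw:cpu_threads": "1",
        "hw:mem_page_size": "large",     # memory must come from hugepages
    }

    def dedicated_pcpus_needed(extra_specs, flavor_vcpus):
        """How many pinned host pCPUs this instance consumes."""
        if extra_specs.get("hw:cpu_policy") == "dedicated":
            return flavor_vcpus
        return 0  # shared (floating) CPUs do not consume dedicated pCPUs

    # 6 sockets x 1 core x 1 thread => 6 vCPUs, all pinned.
    print(dedicated_pcpus_needed(flavor_extra_specs, flavor_vcpus=6))  # 6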

Expected result
===============
The server is in an active, running state (if the image was replaced in the rebuild command, the server details reference the new image).

Actual result
=============
The server is in an error state with a "No valid host was found" error.

Message: No valid host was found. There are not enough hosts available.
Code: 500
Details:
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 966, in rebuild_instance
    return_alternates=False)
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 723, in _schedule_instances
    return_alternates=return_alternates)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 907, in wrapped
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations
    instance_uuids, return_objects, return_alternates)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
    instance_uuids, return_objects, return_alternates)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 179, in call
    retry=self.retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 133, in _send
    retry=retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 584, in send
    call_monitor_timeout, retry=retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 575, in _send
    raise result

Environment
===========
detected in Rocky release

KVM hypervisor

Ceph storage

Neutron networks

Logs & Configs
==============
in nova.conf:
enabled_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,NUMATopologyFilter,PciPassthroughFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,CoreFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,DiskFilter,ComputeCapabilitiesFilter,AggregateRamFilter,SameHostFilter,DifferentHostFilter

logs: tbd

Changed in nova:
assignee: nobody → Inbar Stolberg (inbarsto)
description: updated
tags: added: numa scheduler
description: updated
Yossi Ovadia (jabadia)
Changed in nova:
status: New → Confirmed
Changed in nova:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/629646

Revision history for this message
Inbar Stolberg (inbarsto) wrote :
Revision history for this message
sean mooney (sean-k-mooney) wrote :

as per my comment https://review.openstack.org/#/c/629646/4/nova/scheduler/filters/numa_topology_filter.py@103
we cannot skip validating the NUMA topology of a host on rebuild, as the image can alter the guest NUMA topology.

if we rebuild with the same image we skip going back to the scheduler, so the only time we go to the scheduler on rebuild is if the image changed, which means we cannot assume there is enough space on the current host.

as presented this bug is invalid; however, if you can present a way to re-validate that the existing NUMA topology is valid with the new image, instead of just skipping the check, that may be reasonable.

Changed in nova:
status: In Progress → Invalid
Revision history for this message
Inbar Stolberg (inbarsto) wrote :

@sean-k-mooney the bug still exists, so please don't disqualify the bug just because you don't like the PR attached to it.

also, the PR solves most of the issue without causing new issues; the only scenario it does not fix is the one you rightfully mentioned, but solving that would require an extremely large change and it is not likely to be done any time soon.

please reconsider the PR (as mentioned, it fixes some, not all, of the cases).

Changed in nova:
status: Invalid → In Progress
Changed in nova:
status: In Progress → Confirmed
Revision history for this message
David Hill (david-hill-ubisoft) wrote :

Couldn't we simply skip scheduling if the image properties remained unchanged? I mean, let's say I have a RHEL 7.5 image with given metadata and create a new RHEL 7.6 image containing the exact same metadata: why should we go through scheduling again? This is an issue for customers lacking resources ...
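A rough sketch of the idea in this comment, in plain Python; the property prefixes follow the suggestion made later in this thread, and the helper names are illustrative assumptions, not part of nova:

    # Illustrative only: skip rescheduling when the NUMA-relevant image
    # properties are unchanged between the old and the new image.
    NUMA_PREFIXES = ("hw_numa", "hw_cpu", "hw_mem_page_size")

    def numa_relevant(props):
        """Keep only image properties that can alter the guest NUMA topology."""
        return {k: v for k, v in props.items() if k.startswith(NUMA_PREFIXES)}

    def rebuild_needs_scheduling(old_image_props, new_image_props):
        return numa_relevant(old_image_props) != numa_relevant(new_image_props)

    # Example: RHEL 7.5 -> RHEL 7.6 with identical metadata: no rescheduling needed.
    old = {"hw_cpu_policy": "dedicated", "os_distro": "rhel7.5"}
    new = {"hw_cpu_policy": "dedicated", "os_distro": "rhel7.6"}
    print(rebuild_needs_scheduling(old, new))  # False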

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.opendev.org/661503

Changed in nova:
assignee: Inbar Stolberg (inbarsto) → David Hill (david-hill-ubisoft)
status: Confirmed → In Progress
Revision history for this message
sean mooney (sean-k-mooney) wrote :

@inbar stolberg
i was not marking it invalid because i did not like the proposed change.
i marked it as invalid because in-place rebuild of an instance with a NUMA topology has never
been supported, so this is a new feature rather than a bug.

the NUMA topology filter works by delegating to nova.virt.hardware to calculate
a new cpu and memory assignment for an instance on a given host. if the hardware module
is able to calculate an assignment given the constraints of the image and flavor, then
the filter reports that the host passes.

the hardware module does not have the concept of a rebuild, so it always calculates the assignment
as if it were a new instance. to make the filter work in the in-place rebuild case would require
the hardware module to be extended so it can revalidate the existing assignment in the resource tracker.

@david hill
ack, it is, but it is something that has never been supported, so it's not a bug.
that said, your approach could work, but i have left a review pointing out that you are checking the wrong
image properties. if you generalise your approach to check all image properties that start with
hw_numa and hw_cpu, plus hw_mem_page_size, it might be workable.
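To illustrate the distinction described above, a small sketch in plain Python; both function names and signatures are assumptions for illustration, not nova's actual API:

    def fit_instance_to_host(host_free_pcpus, requested_pcpus):
        """Current behaviour: compute a brand-new pinning from the host's
        *free* pCPUs, exactly as for a newly booted instance. Returns None
        if it cannot fit, which is why the filter rejects an in-place
        rebuild on a full host."""
        free = sorted(host_free_pcpus)
        if requested_pcpus > len(free):
            return None
        return set(free[:requested_pcpus])

    def revalidate_existing_pinning(current_pinning, new_image_needs_dedicated):
        """Hypothetical rebuild-aware path: keep the pinning the instance
        already holds in the resource tracker and only verify that it still
        satisfies the constraints derived from the *new* image."""
        if new_image_needs_dedicated and not current_pinning:
            return None
        return current_pinning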

Matt Riedemann (mriedem)
tags: added: rebuild
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.opendev.org/629646
Reason: This looks abandoned so I'm going to abandon it. I didn't read all of the details, but ignoring this filter during rebuild if the image changes risks just pushing the failure down to the compute since we don't actually claim for rebuild in the compute which is bug 1763766.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.opendev.org/661503

Matt Riedemann (mriedem)
Changed in nova:
status: In Progress → Confirmed
Matt Riedemann (mriedem)
Changed in nova:
assignee: David Hill (david-hill-ubisoft) → nobody
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.opendev.org/689861

Changed in nova:
assignee: nobody → sean mooney (sean-k-mooney)
status: Confirmed → In Progress
Revision history for this message
Inbar Stolberg (inbarsto) wrote :

please see comments on PR: https://review.opendev.org/661503

PR contains same logic as https://review.opendev.org/689861 and was rejected due to providing only a partial fix.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.opendev.org/689861
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=3f9411071d4c1a04ab0b68fd635597bf6959c0ca
Submitter: Zuul
Branch: master

commit 3f9411071d4c1a04ab0b68fd635597bf6959c0ca
Author: Sean Mooney <email address hidden>
Date: Mon Oct 21 16:17:17 2019 +0000

    Disable NUMATopologyFilter on rebuild

    This change leverages the new NUMA constraint checking added
    in I0322d872bdff68936033a6f5a54e8296a6fb3434 to allow the
    NUMATopologyFilter to be skipped on rebuild.

    As the new behavior of rebuild enforces that no changes
    to the NUMA constraints are allowed on rebuild, we no longer
    need to execute the NUMATopologyFilter. Previously
    the NUMATopologyFilter would process the rebuild request
    as if it was a request to spawn a new instance, as the
    numa_fit_instance_to_host function is not rebuild aware.

    As such, prior to this change a rebuild would only succeed
    if a host had enough additional capacity for a second instance
    on the same host meeting the requirements of the new image and
    existing flavor. This behavior was incorrect on two counts, as
    a rebuild uses a noop claim. First, the resource usage cannot
    change, so it was incorrect to require the additional capacity
    to rebuild an instance. Secondly, it was incorrect not to assert
    that the resource usage remained the same.

    I0322d872bdff68936033a6f5a54e8296a6fb3434 addressed guarding the
    rebuild against altering the resource usage, and this change
    allows in-place rebuild.

    This change found a latent bug that will be addressed in a follow-up
    change, and updates the functional tests to note the incorrect
    behavior.

    Change-Id: I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132
    Closes-Bug: #1804502
    Implements: blueprint inplace-rebuild-of-numa-instances
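
Based on the commit message above, the shape of the merged change is roughly the following; the attribute and helper names in this sketch are assumptions for illustration, not necessarily nova's exact code:

    def host_passes(host_state, request_spec):
        # Rebuild: the instance keeps its existing NUMA/CPU assignment, and a
        # separate check added in the earlier change guarantees the new image
        # cannot alter the NUMA constraints, so there is nothing for this
        # filter to recompute.
        if getattr(request_spec, "is_rebuild", False):  # assumed flag name
            return True
        # Normal boot/migration path: try to fit a new NUMA assignment.
        return fit_new_numa_assignment(host_state, request_spec) is not None

    def fit_new_numa_assignment(host_state, request_spec):
        """Placeholder for the real fitting logic in nova.virt.hardware."""
        return None

    class _Spec:
        is_rebuild = True

    print(host_passes(None, _Spec()))  # True: the filter is skipped on rebuild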

Changed in nova:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/698260

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/train)

Fix proposed to branch: stable/train
Review: https://review.opendev.org/698532

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/train)

Related fix proposed to branch: stable/train
Review: https://review.opendev.org/700127

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.opendev.org/698260
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f6060ab6b54261ff50b8068732f6e509619d713e
Submitter: Zuul
Branch: master

commit f6060ab6b54261ff50b8068732f6e509619d713e
Author: Sean Mooney <email address hidden>
Date: Tue Dec 10 14:20:33 2019 +0000

    FUP for in-place numa rebuild

    This patch addresses a number of typos and minor
    issues raised during review of [1][2]. A summary
    of the changes are corrections to typos in comments,
    a correction to the exception message, an update to
    the release note and the addition of debug logging.

    [1] I0322d872bdff68936033a6f5a54e8296a6fb3434
    [2] I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132

    Related-Bug: #1804502
    Related-Bug: #1763766

    Change-Id: I8975e524cd5a9c7dfb065bb2dc8ceb03f1b89e7b

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/train)

Reviewed: https://review.opendev.org/698532
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=94c0362918169a1fa06aa6cf5a483e9285d7b91f
Submitter: Zuul
Branch: stable/train

commit 94c0362918169a1fa06aa6cf5a483e9285d7b91f
Author: Sean Mooney <email address hidden>
Date: Mon Oct 21 16:17:17 2019 +0000

    Disable NUMATopologyFilter on rebuild

    This change leverages the new NUMA constraint checking added
    in I0322d872bdff68936033a6f5a54e8296a6fb3434 to allow the
    NUMATopologyFilter to be skipped on rebuild.

    As the new behavior of rebuild enforces that no changes
    to the NUMA constraints are allowed on rebuild, we no longer
    need to execute the NUMATopologyFilter. Previously
    the NUMATopologyFilter would process the rebuild request
    as if it was a request to spawn a new instance, as the
    numa_fit_instance_to_host function is not rebuild aware.

    As such, prior to this change a rebuild would only succeed
    if a host had enough additional capacity for a second instance
    on the same host meeting the requirements of the new image and
    existing flavor. This behavior was incorrect on two counts, as
    a rebuild uses a noop claim. First, the resource usage cannot
    change, so it was incorrect to require the additional capacity
    to rebuild an instance. Secondly, it was incorrect not to assert
    that the resource usage remained the same.

    I0322d872bdff68936033a6f5a54e8296a6fb3434 addressed guarding the
    rebuild against altering the resource usage, and this change
    allows in-place rebuild.

    This change found a latent bug that will be addressed in a follow-up
    change, and updates the functional tests to note the incorrect
    behavior.

    Change-Id: I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132
    Closes-Bug: #1804502
    Implements: blueprint inplace-rebuild-of-numa-instances
    (cherry picked from commit 3f9411071d4c1a04ab0b68fd635597bf6959c0ca)

tags: added: in-stable-train
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/train)

Reviewed: https://review.opendev.org/700127
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=48bb9a9663374936221144bb6a24688128a51146
Submitter: Zuul
Branch: stable/train

commit 48bb9a9663374936221144bb6a24688128a51146
Author: Sean Mooney <email address hidden>
Date: Tue Dec 10 14:20:33 2019 +0000

    FUP for in-place numa rebuild

    This patch addresses a number of typos and minor
    issues raised during review of [1][2]. A summary
    of the changes are corrections to typos in comments,
    a correction to the exception message, an update to
    the release note and the addition of debug logging.

    [1] I0322d872bdff68936033a6f5a54e8296a6fb3434
    [2] I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132

    Related-Bug: #1804502
    Related-Bug: #1763766

    Change-Id: I8975e524cd5a9c7dfb065bb2dc8ceb03f1b89e7b
    (cherry picked from commit f6060ab6b54261ff50b8068732f6e509619d713e)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/stein)

Fix proposed to branch: stable/stein
Review: https://review.opendev.org/702973

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/stein)

Related fix proposed to branch: stable/stein
Review: https://review.opendev.org/702974

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/rocky)

Fix proposed to branch: stable/rocky
Review: https://review.opendev.org/703117

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/rocky)

Related fix proposed to branch: stable/rocky
Review: https://review.opendev.org/703118

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.opendev.org/703141

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/queens)

Related fix proposed to branch: stable/queens
Review: https://review.opendev.org/703142

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/stein)

Reviewed: https://review.opendev.org/702973
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=4a691c33d13611714135b9390cb53de726fc901d
Submitter: Zuul
Branch: stable/stein

commit 4a691c33d13611714135b9390cb53de726fc901d
Author: Sean Mooney <email address hidden>
Date: Mon Oct 21 16:17:17 2019 +0000

    Disable NUMATopologyFilter on rebuild

    This change leverages the new NUMA constraint checking added
    in I0322d872bdff68936033a6f5a54e8296a6fb3434 to allow the
    NUMATopologyFilter to be skipped on rebuild.

    As the new behavior of rebuild enforces that no changes
    to the NUMA constraints are allowed on rebuild, we no longer
    need to execute the NUMATopologyFilter. Previously
    the NUMATopologyFilter would process the rebuild request
    as if it was a request to spawn a new instance, as the
    numa_fit_instance_to_host function is not rebuild aware.

    As such, prior to this change a rebuild would only succeed
    if a host had enough additional capacity for a second instance
    on the same host meeting the requirements of the new image and
    existing flavor. This behavior was incorrect on two counts, as
    a rebuild uses a noop claim. First, the resource usage cannot
    change, so it was incorrect to require the additional capacity
    to rebuild an instance. Secondly, it was incorrect not to assert
    that the resource usage remained the same.

    I0322d872bdff68936033a6f5a54e8296a6fb3434 addressed guarding the
    rebuild against altering the resource usage, and this change
    allows in-place rebuild.

    This change found a latent bug that will be addressed in a follow-up
    change, and updates the functional tests to note the incorrect
    behavior.

    Change-Id: I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132
    Closes-Bug: #1804502
    Implements: blueprint inplace-rebuild-of-numa-instances
    (cherry picked from commit 3f9411071d4c1a04ab0b68fd635597bf6959c0ca)
    (cherry picked from commit 94c0362918169a1fa06aa6cf5a483e9285d7b91f)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/stein)

Reviewed: https://review.opendev.org/702974
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=8346c527b379395851a9de063b4978b489076bf6
Submitter: Zuul
Branch: stable/stein

commit 8346c527b379395851a9de063b4978b489076bf6
Author: Sean Mooney <email address hidden>
Date: Tue Dec 10 14:20:33 2019 +0000

    FUP for in-place numa rebuild

    This patch addresses a number of typos and minor
    issues raised during review of [1][2]. A summary
    of the changes are corrections to typos in comments,
    a correction to the exception message, an update to
    the release note and the addition of debug logging.

    [1] I0322d872bdff68936033a6f5a54e8296a6fb3434
    [2] I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132

    Related-Bug: #1804502
    Related-Bug: #1763766

    Conflicts:
        nova/tests/functional/libvirt/test_numa_servers.py
    NOTE(sean-k-mooney): conflict was due to the use of
    NUMAHostInfo instead of HostInfo.

    Change-Id: I8975e524cd5a9c7dfb065bb2dc8ceb03f1b89e7b
    (cherry picked from commit f6060ab6b54261ff50b8068732f6e509619d713e)
    (cherry picked from commit 48bb9a9663374936221144bb6a24688128a51146)

tags: added: in-stable-stein
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 20.1.0

This issue was fixed in the openstack/nova 20.1.0 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 19.1.0

This issue was fixed in the openstack/nova 19.1.0 release.

Revision history for this message
Laurent Dumont (baconpackets) wrote :

Hey everyone,

We are tracking down a similar issue where an in-place rebuild through Heat might fail depending on the resources in use by other instances on the compute. I'm trying to get a reproducible scenario but I'm unable to.

Is there any specific combination of NUMA topology and SR-IOV that triggers this?

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/rocky)

Reviewed: https://review.opendev.org/703117
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f08d0ccf844e127f693cfc5498a205b13c873833
Submitter: Zuul
Branch: stable/rocky

commit f08d0ccf844e127f693cfc5498a205b13c873833
Author: Sean Mooney <email address hidden>
Date: Mon Oct 21 16:17:17 2019 +0000

    Disable NUMATopologyFilter on rebuild

    This change leverages the new NUMA constraint checking added
    in I0322d872bdff68936033a6f5a54e8296a6fb3434 to allow the
    NUMATopologyFilter to be skipped on rebuild.

    As the new behavior of rebuild enforces that no changes
    to the NUMA constraints are allowed on rebuild, we no longer
    need to execute the NUMATopologyFilter. Previously
    the NUMATopologyFilter would process the rebuild request
    as if it was a request to spawn a new instance, as the
    numa_fit_instance_to_host function is not rebuild aware.

    As such, prior to this change a rebuild would only succeed
    if a host had enough additional capacity for a second instance
    on the same host meeting the requirements of the new image and
    existing flavor. This behavior was incorrect on two counts, as
    a rebuild uses a noop claim. First, the resource usage cannot
    change, so it was incorrect to require the additional capacity
    to rebuild an instance. Secondly, it was incorrect not to assert
    that the resource usage remained the same.

    I0322d872bdff68936033a6f5a54e8296a6fb3434 addressed guarding the
    rebuild against altering the resource usage, and this change
    allows in-place rebuild.

    This change found a latent bug that will be addressed in a follow-up
    change, and updates the functional tests to note the incorrect
    behavior.

    Conflicts:
        nova/tests/functional/libvirt/test_numa_servers.py

    NOTE(sean-k-mooney): Trivial import conflicts

    Change-Id: I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132
    Closes-Bug: #1804502
    Implements: blueprint inplace-rebuild-of-numa-instances
    (cherry picked from commit 3f9411071d4c1a04ab0b68fd635597bf6959c0ca)
    (cherry picked from commit 94c0362918169a1fa06aa6cf5a483e9285d7b91f)
    (cherry picked from commit 4a691c33d13611714135b9390cb53de726fc901d)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/rocky)

Reviewed: https://review.opendev.org/703118
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=84c63816602dcdf91885d20bb5d26cec336fb71e
Submitter: Zuul
Branch: stable/rocky

commit 84c63816602dcdf91885d20bb5d26cec336fb71e
Author: Sean Mooney <email address hidden>
Date: Tue Dec 10 14:20:33 2019 +0000

    FUP for in-place numa rebuild

    This patch addresses a number of typos and minor
    issues raised during review of [1][2]. A summary
    of the changes are corrections to typos in comments,
    a correction to the exception message, an update to
    the release note and the addition of debug logging.

    [1] I0322d872bdff68936033a6f5a54e8296a6fb3434
    [2] I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132

    Change-Id: I8975e524cd5a9c7dfb065bb2dc8ceb03f1b89e7b
    Related-Bug: #1804502
    Related-Bug: #1763766
    (cherry picked from commit f6060ab6b54261ff50b8068732f6e509619d713e)
    (cherry picked from commit 48bb9a9663374936221144bb6a24688128a51146)
    (cherry picked from commit 8346c527b379395851a9de063b4978b489076bf6)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova rocky-eol

This issue was fixed in the openstack/nova rocky-eol release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/queens)

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/queens
Review: https://review.opendev.org/c/openstack/nova/+/703141
Reason: This branch transitioned to End of Life for this project; open patches need to be closed to be able to delete the branch.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by "Elod Illes <email address hidden>" on branch: stable/queens
Review: https://review.opendev.org/c/openstack/nova/+/703142
Reason: This branch transitioned to End of Life for this project; open patches need to be closed to be able to delete the branch.
