ovn-octavia-provider does not report status correctly to octavia

Bug #1965772 reported by Gabriel Barazer
This bug affects 1 person
Affects: neutron
Status: Fix Released
Importance: Undecided
Assigned to: Fernando Royo

Bug Description

Hi all,

The OVN Octavia provider does not report status correctly to Octavia due to a few bugs in the health monitoring implementation:

1) https://opendev.org/openstack/ovn-octavia-provider/src/commit/d6adbcef86e32bc7befbd5890a2bc79256b7a8e2/ovn_octavia_provider/helper.py#L2374 :
In _get_lb_on_hm_event, the request to the OVN NB API (db_find_rows) is incorrect:
        lbs = self.ovn_nbdb_api.db_find_rows(
            'Load_Balancer', (('ip_port_mappings', '=', mappings),
                              ('protocol', '=', row.protocol))).execute()

It should be:
        lbs = self.ovn_nbdb_api.db_find_rows(
            'Load_Balancer', ('ip_port_mappings', '=', mappings),
                              ('protocol', '=', row.protocol[0])).execute()

Note the removed extra parentheses and that the protocol string is taken from the first element of the protocol list.

2) https://opendev.org/openstack/ovn-octavia-provider/src/commit/d6adbcef86e32bc7befbd5890a2bc79256b7a8e2/ovn_octavia_provider/helper.py#L2426 :

There is confusion around the Pool object returned by pool = self._octavia_driver_lib.get_pool(pool_id): this object does not have an operating_status attribute, and given the current state of octavia-lib it seems possible to set and update the status of a listener/pool/member but not to retrieve the current status.

See https://opendev.org/openstack/octavia-lib/src/branch/master/octavia_lib/api/drivers/data_models.py for the current Pool data model.

As a result, the computation done by _get_new_operating_statuses cannot use the current operating status to derive a new operating status. It is still possible to set an operating status for the members by setting them to "OFFLINE" separately when an HM update event is fired.
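
For illustration, here is a minimal sketch (not the provider's actual code) of how member statuses can be pushed to Octavia without reading them back first: the driver status API only accepts updates, so the provider has to decide the new value itself. driver_lib is assumed to be an already-initialised octavia_lib.api.drivers.driver_lib.DriverLibrary instance and the member IDs are placeholders.

    def mark_members_offline(driver_lib, member_ids):
        """Report the given pool members as OFFLINE to Octavia."""
        status = {
            'members': [
                {'id': member_id, 'operating_status': 'OFFLINE'}
                for member_id in member_ids
            ],
        }
        # update_loadbalancer_status() is the only status channel back to
        # Octavia; there is no call that returns the previously reported
        # operating_status.
        driver_lib.update_loadbalancer_status(status)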

3) The Load_Balancer_Health_Check NB entry creates the Service_Monitor SB entries, but there is no way to link the created Service_Monitor entries back to the original NB entry. As a result, health monitor events received from the SB and processed by the Octavia driver agent cannot be accurately matched to the correct Octavia health monitor entry. If, for example, two load balancer entries use the same pool members and the same ports, only the first LB returned by db_find_rows would be updated (assuming bug #2 is fixed). Having two load balancers with the same members is perfectly valid, e.g. one load balancer for public traffic (with a VIP from a public pool) and another for internal/admin traffic (with a VIP from another pool and a source-range whitelist). The code that selects only the first LB in this case is the same code referenced in bug #1.
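
To make the two-LB case concrete, a hedged sketch of the lookup follows; the names mirror the helper code but this is illustrative, not the driver implementation. ovn_nbdb_api is assumed to be an ovsdbapp NB API connection, and mappings/protocol the values already extracted from the Service_Monitor event (protocol as a scalar).

    def lbs_for_hm_event(ovn_nbdb_api, mappings, protocol):
        rows = ovn_nbdb_api.db_find_rows(
            'Load_Balancer',
            ('ip_port_mappings', '=', mappings),
            ('protocol', '=', protocol)).execute()
        # A public-facing LB and an internal/admin LB can legitimately share
        # the same members, so every matching row must be processed, not
        # only rows[0].
        return rows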

tags: added: ovn-octavia-provider
Revision history for this message
Fernando Royo (froyoredhat) wrote :

Points 1) and 2) confirmed

Changed in neutron:
status: New → Confirmed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (master)
Changed in neutron:
status: Confirmed → In Progress
Revision history for this message
Michael Johnson (johnsom) wrote :

So looking at the code here, _get_new_operating_statuses is completely wrong and inefficient.

There should be no reason that calculating the operating status to send back to the API needs any historical information. Operating status is the "observed" point-in-time status of the load balancer and its child objects.

The OVN provider should be calculating the current status based on the current status of those objects in OVN.

Looking at the code here, you already have all of the required status information from OVN from the "ovn_lb.external_ids.items()" call. You just need to apply the business logic you want to use (in this case, since you are using the Octavia tempest suite, it will need to match the amphora driver).

The first step is to evaluate the members:
Collect the status of your member objects from OVN
If they are administratively down, set it to OFFLINE
If they have no health monitor, set it to NO_MONITOR
If OVN supports "weight 0" draining (I don't think it does), set it to DRAINING
If the member is failing health checks, or otherwise not functioning, set it to ERROR

Step two, calculate the pool status:
If it is administratively down, set it to OFFLINE
If the pool has a capacity limit or any other OVN-proprietary out-of-service condition, set it to ERROR
If all of the members are in ERROR (calculated above), set it to ERROR
If one or more of the members are ERROR, but not all, set it to DEGRADED
Otherwise, the pool is ONLINE

Step three, calculate the health monitor status:
If the health monitor is administratively down, set it to OFFLINE
If the health monitor is broken in some way, set it to ERROR
Otherwise the health monitor is ONLINE

Step four, calculate the listener status:
If the listener is administratively down, set it to OFFLINE
If the listener is out of capacity (i.e. at its connection limit setting), set it to DEGRADED
If one or more associated pools are DEGRADED, set the listener to DEGRADED
If one or more of the pools is in ERROR, set the listener to ERROR
Otherwise, the listener is ONLINE

Step five, calculate the load balancer status:
If the load balancer is administratively down, set it to OFFLINE
If one or more of the listeners is DEGRADED, set it to DEGRADED
If one or more of the listeners is ERROR, set it to ERROR
Otherwise, the load balancer is ONLINE

That should provide you with everything you need to make the update_loadbalancer_status() call.
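
A rough sketch of the rollup described in the steps above, using plain status strings that correspond to the octavia-lib operating statuses; the input shapes are invented for illustration, since the real provider derives them from the OVN Load_Balancer external_ids.

    ONLINE, OFFLINE, ERROR, DEGRADED = 'ONLINE', 'OFFLINE', 'ERROR', 'DEGRADED'

    def pool_status(admin_up, member_statuses):
        # member_statuses: list of per-member operating statuses.
        if not admin_up:
            return OFFLINE
        if member_statuses and all(m == ERROR for m in member_statuses):
            return ERROR
        if any(m == ERROR for m in member_statuses):
            return DEGRADED
        return ONLINE

    def listener_status(admin_up, pool_statuses):
        if not admin_up:
            return OFFLINE
        if any(p == ERROR for p in pool_statuses):
            return ERROR
        if any(p == DEGRADED for p in pool_statuses):
            return DEGRADED
        return ONLINE

    def loadbalancer_status(admin_up, listener_statuses):
        if not admin_up:
            return OFFLINE
        if any(s == ERROR for s in listener_statuses):
            return ERROR
        if any(s == DEGRADED for s in listener_statuses):
            return DEGRADED
        return ONLINE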

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (master)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/839055
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/61a56dc37754ac8daf8be33547b5774728020994
Submitter: "Zuul (22348)"
Branch: master

commit 61a56dc37754ac8daf8be33547b5774728020994
Author: Fernando Royo <email address hidden>
Date: Fri Apr 22 13:30:59 2022 +0200

    Fix request to OVN NB DB API

    Patch [1] introduced HM support to the OVN Octavia provider, but it
    makes a malformed request to the OVN NB DB API when a health_monitor
    event is received, while searching for the Load Balancer related to
    the event. The request is built using data from the related members
    and the protocol (which is an array).

    This patch fixes that request so that it is well formed.

    [1] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/801890

    Partial-Bug: #1965772

    Change-Id: I1348704a7a0538f570237e9687f5d770159c1392

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/yoga)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/xena)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/wallaby)

Fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844014

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (master)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/843308
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/c478567b4efb669dfb429fb4f3e3bbdb41c622f7
Submitter: "Zuul (22348)"
Branch: master

commit c478567b4efb669dfb429fb4f3e3bbdb41c622f7
Author: Fernando Royo <email address hidden>
Date: Wed May 25 14:21:20 2022 +0200

    Fix calculation of the LB status after an HM event

    The _get_new_operating_statuses function calculated the LB status
    from the received event together with the current operating_status
    of the members, but those values were not being stored and were
    therefore not available for the calculation.

    Now the member statuses are stored in the external_ids field of the
    LB under the neutron:member_status key, which records the uuid of
    each member together with its current status.

    This way the real-time status of the whole LB hierarchy can be
    calculated from the values stored there, which are updated when HM
    events are received.

    Partial-Bug: #1965772

    Depends-On: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/839055
    Change-Id: I5f5225a94a9a8401d350d2fda987bf68869def22
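
A sketch of the bookkeeping the commit above describes. The neutron:member_status key comes from the commit message; the exact serialization (a JSON object mapping member UUID to status) is an assumption made for illustration, not necessarily what the merged patch uses.

    import json

    MEMBER_STATUS_KEY = 'neutron:member_status'

    def set_member_status(external_ids, member_id, status):
        # Record a member's latest HM-reported status on the LB row.
        statuses = json.loads(external_ids.get(MEMBER_STATUS_KEY, '{}'))
        statuses[member_id] = status
        external_ids[MEMBER_STATUS_KEY] = json.dumps(statuses)

    def member_statuses(external_ids):
        # Return the stored member-uuid -> status mapping (possibly empty).
        return json.loads(external_ids.get(MEMBER_STATUS_KEY, '{}'))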

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/yoga)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/xena)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/wallaby)

Fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844262

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (master)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/yoga)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844012
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/e9733c73f6200150b1a6ffdb9fc63d65ea578f62
Submitter: "Zuul (22348)"
Branch: stable/yoga

commit e9733c73f6200150b1a6ffdb9fc63d65ea578f62
Author: Fernando Royo <email address hidden>
Date: Fri Apr 22 13:30:59 2022 +0200

    Fix request to OVN NB DB API

    Patch [1] introduced HM support to the OVN Octavia provider, but it
    makes a malformed request to the OVN NB DB API when a health_monitor
    event is received, while searching for the Load Balancer related to
    the event. The request is built using data from the related members
    and the protocol (which is an array).

    This patch fixes that request so that it is well formed.

    [1] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/801890

    Partial-Bug: #1965772

    Change-Id: I1348704a7a0538f570237e9687f5d770159c1392
    (cherry picked from commit 61a56dc37754ac8daf8be33547b5774728020994)

tags: added: in-stable-yoga
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844014
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/a32db16cf53cde9498f9b73c874f49ac7a2a63a3
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit a32db16cf53cde9498f9b73c874f49ac7a2a63a3
Author: Fernando Royo <email address hidden>
Date: Fri Apr 22 13:30:59 2022 +0200

    Fix request to OVN NB DB API

    Patch [1] introduced HM support to the OVN Octavia provider, but it
    makes a malformed request to the OVN NB DB API when a health_monitor
    event is received, while searching for the Load Balancer related to
    the event. The request is built using data from the related members
    and the protocol (which is an array).

    This patch fixes that request so that it is well formed.

    [1] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/801890

    Partial-Bug: #1965772

    Change-Id: I1348704a7a0538f570237e9687f5d770159c1392
    (cherry picked from commit 61a56dc37754ac8daf8be33547b5774728020994)

tags: added: in-stable-wallaby
tags: added: in-stable-xena
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/xena)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844013
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/620662d3d3dfff3eed22bf96dd634e62fe0a543d
Submitter: "Zuul (22348)"
Branch: stable/xena

commit 620662d3d3dfff3eed22bf96dd634e62fe0a543d
Author: Fernando Royo <email address hidden>
Date: Fri Apr 22 13:30:59 2022 +0200

    Fix request to OVN NB DB API

    Patch [1] introduced HM support to the OVN Octavia provider, but it
    makes a malformed request to the OVN NB DB API when a health_monitor
    event is received, while searching for the Load Balancer related to
    the event. The request is built using data from the related members
    and the protocol (which is an array).

    This patch fixes that request so that it is well formed.

    [1] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/801890

    Partial-Bug: #1965772

    Change-Id: I1348704a7a0538f570237e9687f5d770159c1392
    (cherry picked from commit 61a56dc37754ac8daf8be33547b5774728020994)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844261
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/b2c2651ecb747af0b6e31adb847c6a590d690f41
Submitter: "Zuul (22348)"
Branch: stable/xena

commit b2c2651ecb747af0b6e31adb847c6a590d690f41
Author: Fernando Royo <email address hidden>
Date: Wed May 25 14:21:20 2022 +0200

    Fix calculation of the LB status after an HM event

    The _get_new_operating_statuses function calculated the LB status
    from the received event together with the current operating_status
    of the members, but those values were not being stored and were
    therefore not available for the calculation.

    Now the member statuses are stored in the external_ids field of the
    LB under the neutron:member_status key, which records the uuid of
    each member together with its current status.

    This way the real-time status of the whole LB hierarchy can be
    calculated from the values stored there, which are updated when HM
    events are received.

    Partial-Bug: #1965772

    Depends-On: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844013

    Change-Id: I5f5225a94a9a8401d350d2fda987bf68869def22
    (cherry picked from commit c478567b4efb669dfb429fb4f3e3bbdb41c622f7)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844262
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/1d7353352db5a890c9418e9620a690341b0d9add
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 1d7353352db5a890c9418e9620a690341b0d9add
Author: Fernando Royo <email address hidden>
Date: Wed May 25 14:21:20 2022 +0200

    Fix calculation of the LB status after an HM event

    The _get_new_operating_statuses function calculated the LB status
    from the received event together with the current operating_status
    of the members, but those values were not being stored and were
    therefore not available for the calculation.

    Now the member statuses are stored in the external_ids field of the
    LB under the neutron:member_status key, which records the uuid of
    each member together with its current status.

    This way the real-time status of the whole LB hierarchy can be
    calculated from the values stored there, which are updated when HM
    events are received.

    Partial-Bug: #1965772

    Depends-On: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844014

    Change-Id: I5f5225a94a9a8401d350d2fda987bf68869def22
    (cherry picked from commit c478567b4efb669dfb429fb4f3e3bbdb41c622f7)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/yoga)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844260
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/1a9196b38b5d8527b76b9c85b1e6736b8156e2b1
Submitter: "Zuul (22348)"
Branch: stable/yoga

commit 1a9196b38b5d8527b76b9c85b1e6736b8156e2b1
Author: Fernando Royo <email address hidden>
Date: Wed May 25 14:21:20 2022 +0200

    Fix calculation of the LB status after an HM event

    The _get_new_operating_statuses function calculated the LB status
    from the received event together with the current operating_status
    of the members, but those values were not being stored and were
    therefore not available for the calculation.

    Now the member statuses are stored in the external_ids field of the
    LB under the neutron:member_status key, which records the uuid of
    each member together with its current status.

    This way the real-time status of the whole LB hierarchy can be
    calculated from the values stored there, which are updated when HM
    events are received.

    Partial-Bug: #1965772

    Depends-On: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844012

    Change-Id: I5f5225a94a9a8401d350d2fda987bf68869def22
    (cherry picked from commit c478567b4efb669dfb429fb4f3e3bbdb41c622f7)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (master)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/844283
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/7db9e23fd9642eacb78731bf215a4820978436ca
Submitter: "Zuul (22348)"
Branch: master

commit 7db9e23fd9642eacb78731bf215a4820978436ca
Author: Fernando Royo <email address hidden>
Date: Wed Jun 1 13:46:09 2022 +0200

    Apply ServiceMonitorEvent to affected LBs

    Every Health_Monitor on an LB creates a Load_Balancer_Health_Check
    NB entry, which in turn creates 1..n Service_Monitor OVN SB entries,
    one per member to be monitored. When more than one LB exists on the
    same network segment for the same member, the Service_Monitor SB
    entries are not duplicated, so that the same member is not checked
    multiple times.

    When a ServiceMonitorUpdateEvent with status information for a
    member is received, the LBs matching the event information
    (protocol, ip, port and logical_port) are searched. This keeps
    network segments separated: the same ip and port could match, but
    the logical_port would be different.

    The current logic updates only the status of the first LB selected
    as "related" to the member event received from the Service_Monitor,
    but the same information can be used to keep updated all the LBs
    that are on the same network segment and have an associated
    Health_Monitor.

    This patch fixes that behaviour so that the status of all LBs
    affected by a ServiceMonitorUpdateEvent is reported to Octavia.

    Partial-Bug: #1965772
    Closes-Bug: #1965772
    Change-Id: I7c75003516015863320e53f0175dec8fdf4e2cf7
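
An illustrative sketch of the behaviour the commit above describes: every LB matching the Service_Monitor event gets its status recalculated and reported, not only the first match. find_matching_lbs, compute_status_dict and driver_lib are hypothetical helpers standing in for the provider's internal machinery.

    def handle_service_monitor_event(event, find_matching_lbs,
                                     compute_status_dict, driver_lib):
        for ovn_lb in find_matching_lbs(event):
            # Each affected Octavia load balancer gets its own status update,
            # so e.g. a public LB and an internal LB sharing the same members
            # stay in sync.
            driver_lib.update_loadbalancer_status(
                compute_status_dict(ovn_lb, event))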

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/yoga)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/xena)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to ovn-octavia-provider (stable/wallaby)

Fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/846426

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/xena)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/846425
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/b006d5273dcfa109e4d1524ec144781d79174b7f
Submitter: "Zuul (22348)"
Branch: stable/xena

commit b006d5273dcfa109e4d1524ec144781d79174b7f
Author: Fernando Royo <email address hidden>
Date: Wed Jun 1 13:46:09 2022 +0200

    Apply ServiceMonitorEvent to affected LBs

    Every Health_Monitor on an LB creates a Load_Balancer_Health_Check
    NB entry, which in turn creates 1..n Service_Monitor OVN SB entries,
    one per member to be monitored. When more than one LB exists on the
    same network segment for the same member, the Service_Monitor SB
    entries are not duplicated, so that the same member is not checked
    multiple times.

    When a ServiceMonitorUpdateEvent with status information for a
    member is received, the LBs matching the event information
    (protocol, ip, port and logical_port) are searched. This keeps
    network segments separated: the same ip and port could match, but
    the logical_port would be different.

    The current logic updates only the status of the first LB selected
    as "related" to the member event received from the Service_Monitor,
    but the same information can be used to keep updated all the LBs
    that are on the same network segment and have an associated
    Health_Monitor.

    This patch fixes that behaviour so that the status of all LBs
    affected by a ServiceMonitorUpdateEvent is reported to Octavia.

    Partial-Bug: #1965772
    Closes-Bug: #1965772
    Change-Id: I7c75003516015863320e53f0175dec8fdf4e2cf7
    (cherry picked from commit 7db9e23fd9642eacb78731bf215a4820978436ca)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/846426
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/5b1d0bec5a8c6bc7d3291459f8becf16b10a3ed7
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 5b1d0bec5a8c6bc7d3291459f8becf16b10a3ed7
Author: Fernando Royo <email address hidden>
Date: Wed Jun 1 13:46:09 2022 +0200

    Apply ServiceMonitorEvent to affected LBs

    Every Health_Monitor on an LB creates a Load_Balancer_Health_Check
    NB entry, which in turn creates 1..n Service_Monitor OVN SB entries,
    one per member to be monitored. When more than one LB exists on the
    same network segment for the same member, the Service_Monitor SB
    entries are not duplicated, so that the same member is not checked
    multiple times.

    When a ServiceMonitorUpdateEvent with status information for a
    member is received, the LBs matching the event information
    (protocol, ip, port and logical_port) are searched. This keeps
    network segments separated: the same ip and port could match, but
    the logical_port would be different.

    The current logic updates only the status of the first LB selected
    as "related" to the member event received from the Service_Monitor,
    but the same information can be used to keep updated all the LBs
    that are on the same network segment and have an associated
    Health_Monitor.

    This patch fixes that behaviour so that the status of all LBs
    affected by a ServiceMonitorUpdateEvent is reported to Octavia.

    Partial-Bug: #1965772
    Closes-Bug: #1965772
    Change-Id: I7c75003516015863320e53f0175dec8fdf4e2cf7
    (cherry picked from commit 7db9e23fd9642eacb78731bf215a4820978436ca)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to ovn-octavia-provider (stable/yoga)

Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/846423
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/c1f421615ca4fd7bcf6ce8e4d386c90b72446c89
Submitter: "Zuul (22348)"
Branch: stable/yoga

commit c1f421615ca4fd7bcf6ce8e4d386c90b72446c89
Author: Fernando Royo <email address hidden>
Date: Wed Jun 1 13:46:09 2022 +0200

    Apply ServiceMonitorEvent to affected LBs

    Every Health_Monitor on an LB creates a Load_Balancer_Health_Check
    NB entry, which in turn creates 1..n Service_Monitor OVN SB entries,
    one per member to be monitored. When more than one LB exists on the
    same network segment for the same member, the Service_Monitor SB
    entries are not duplicated, so that the same member is not checked
    multiple times.

    When a ServiceMonitorUpdateEvent with status information for a
    member is received, the LBs matching the event information
    (protocol, ip, port and logical_port) are searched. This keeps
    network segments separated: the same ip and port could match, but
    the logical_port would be different.

    The current logic updates only the status of the first LB selected
    as "related" to the member event received from the Service_Monitor,
    but the same information can be used to keep updated all the LBs
    that are on the same network segment and have an associated
    Health_Monitor.

    This patch fixes that behaviour so that the status of all LBs
    affected by a ServiceMonitorUpdateEvent is reported to Octavia.

    Partial-Bug: #1965772
    Closes-Bug: #1965772
    Change-Id: I7c75003516015863320e53f0175dec8fdf4e2cf7
    (cherry picked from commit 7db9e23fd9642eacb78731bf215a4820978436ca)

Changed in neutron:
status: In Progress → Fix Released
assignee: nobody → Fernando Royo (froyoredhat)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/ovn-octavia-provider 1.0.1

This issue was fixed in the openstack/ovn-octavia-provider 1.0.1 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/ovn-octavia-provider 3.0.0.0rc1

This issue was fixed in the openstack/ovn-octavia-provider 3.0.0.0rc1 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/ovn-octavia-provider 1.2.0

This issue was fixed in the openstack/ovn-octavia-provider 1.2.0 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/ovn-octavia-provider 2.1.0

This issue was fixed in the openstack/ovn-octavia-provider 2.1.0 release.
