VMware: failed to snapshot an instance with a large root disk.

Bug #1495429 reported by Kevin Tibi
This bug affects 5 people
Affects                    Status         Importance   Assigned to   Milestone
OpenStack Compute (nova)   Fix Released   High         Eric Brown
oslo.vmware                Fix Released   High         Eric Brown

Bug Description

python-nova-2015.1.1-1.el7.noarch
openstack-nova-common-2015.1.1-1.el7.noarch
python-novaclient-2.23.0-1.el7.noarch
openstack-nova-compute-2015.1.1-1.el7.noarch
python-oslo-vmware-0.11.1-1.el7.noarch

I can't snapshot an instance if the root disk is too large (>8 GB).

The snapshot in vCenter works, the OVF export works, and downloading the image on the Glance node works, but after the download the compute node hits a traceback and deletes the Glance image.

In vCenter I can see that the OVF export times out during the upload.

So I guess that when Nova makes an OVF request (deploy or export) and the transfer takes too long, vCenter deletes the OSTACK_IMG or OSTACK_SNAP object and Nova fails.

Trace in compute.log ==>

2015-09-14 10:46:00.003 10248 DEBUG oslo_vmware.api [-] Fault list: [ManagedObjectNotFound] _invoke_api /usr/lib/python2.7/site-packages/oslo_vmware/api.py:326
2015-09-14 10:46:00.004 10248 DEBUG oslo_vmware.exceptions [-] Fault ManagedObjectNotFound not matched. get_fault_class /usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:250
2015-09-14 10:46:00.004 10248 DEBUG nova.virt.vmwareapi.vm_util [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Destroying the VM destroy_vm /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vm_util.py:1304
2015-09-14 10:46:00.004 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:00.028 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for the task: (returnval){
2015-09-14 10:46:00.028 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
2015-09-14 10:46:00.029 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:05.029 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
2015-09-14 10:46:05.030 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:05.056 10248 DEBUG oslo_vmware.api [-] Task: (returnval){
2015-09-14 10:46:05.056 10248 INFO nova.virt.vmwareapi.vm_util [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Destroyed the VM
2015-09-14 10:46:05.056 10248 DEBUG nova.virt.vmwareapi.vmops [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Deleting Snapshot of the VM instance _delete_vm_snapshot /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py:759
2015-09-14 10:46:05.057 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:05.084 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for the task: (returnval){
2015-09-14 10:46:05.084 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
2015-09-14 10:46:05.085 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:10.085 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
2015-09-14 10:46:10.086 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-09-14 10:46:10.105 10248 DEBUG oslo_vmware.api [-] Task: (returnval){
2015-09-14 10:46:10.106 10248 DEBUG nova.virt.vmwareapi.vmops [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Deleted Snapshot of the VM instance _delete_vm_snapshot /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py:765
2015-09-14 10:46:10.106 10248 DEBUG nova.compute.manager [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Cleaning up image f17662ad-8627-4323-ab57-b2240ed45b61 decorated_function /usr/lib/python2.7/site-packages/nova/compute/manager.py:397
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Traceback (most recent call last):
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] *args, **kwargs)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3278, in snapshot_instance
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] task_states.IMAGE_SNAPSHOT)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3308, in _snapshot_instance
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] update_task_state)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/driver.py", line 506, in snapshot
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] self._vmops.snapshot(context, instance, image_id, update_task_state)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py", line 856, in snapshot
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] vmdk_size=vmdk.capacity_in_bytes)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/images.py", line 484, in upload_image_stream_optimized
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] image_meta=image_metadata)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/nova/virt/vmwareapi/images.py", line 208, in start_transfer
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] read_file_handle.close()
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/oslo_vmware/rw_handles.py", line 569, in close
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] 'state')
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 346, in invoke_api
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] return _invoke_api(module, method, *args, **kwargs)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 122, in func
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] return evt.wait()
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] return hubs.get_hub().switch()
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] return self.greenlet.switch()
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/oslo_vmware/common/loopingcall.py", line 123, in _inner
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] idle = self.f(*self.args, **self.kw)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 95, in _func
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] result = f(*args, **kwargs)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] File "/usr/lib/python2.7/site-packages/oslo_vmware/api.py", line 329, in _invoke_api
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] raise clazz(six.text_type(excep), excep.details)
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] VMwareDriverException: Cet objet a déjà été supprimé ou n'a pas été entièrement créé
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Cause: Server raised fault: 'Cet objet a déjà été supprimé ou n'a pas été entièrement créé'
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Faults: [ManagedObjectNotFound]
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Details: {'obj': 'session[f73d9a23-5aee-9deb-9f62-384a5238eca8]52973924-3d33-8ab9-7d95-75afec081b54'}
2015-09-14 10:46:10.106 10248 TRACE nova.compute.manager [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e]

Glance log ==>

2015-09-14 10:46:06.628 19221 ERROR glance.registry.client.v1.client [req-08e83cd9-7999-4de9-ab9a-66cb012bfb30 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Registry client request GET /images/f17662ad-8627-4323-ab57-b2240ed45b61 raised NotFound
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client Traceback (most recent call last):
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client File "/usr/lib/python2.7/site-packages/glance/registry/client/v1/client.py", line 117, in do_request
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client **kwargs)
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 71, in wrapped
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client return func(self, *args, **kwargs)
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 376, in do_request
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client headers=copy.deepcopy(headers))
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 88, in wrapped
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client return func(self, method, url, body, headers)
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 523, in _do_request
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client raise exception.NotFound(res.read())
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client NotFound: 404 Not Found
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client The resource could not be found.
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client
2015-09-14 10:46:06.628 19221 TRACE glance.registry.client.v1.client

tags: added: snapshot vmware
Kevin Tibi (ktibi)
description: updated
Kevin Tibi (ktibi)
description: updated
Tracy Jones (tjones-i)
Changed in nova:
assignee: nobody → Eric Brown (ericwb)
Changed in oslo.vmware:
assignee: nobody → Eric Brown (ericwb)
Changed in nova:
importance: Undecided → High
Changed in oslo.vmware:
importance: Undecided → High
Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote :

@Eric Brown: Have you already had a chance to look at this bug? Do you have all the debug data you need?

Revision history for this message
Kevin Tibi (ktibi) wrote :

News:

I still have this bug, but after more research I can add more data.

Steps of the bug:

1. nova-compute snapshots the instance.
2. The Glance registry downloads the image.
3. During the download, vCenter times out on the OVF export (the OVF object is deleted, but not on the datastore).
4. At the end of the download, the compute node raises a traceback because there is no object left to delete in vCenter.
5. The compute node sends a request to Glance to delete the new image.
6. Glance deletes the image.

I guess we could maybe drop step 5, because I can still see my image after step 4 and before step 5.
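
For reference, the image deletion in step 5 corresponds to the "Cleaning up image ... decorated_function" line in the compute traceback above: an error-handling decorator in nova/compute/manager.py removes the partially created Glance image when the snapshot call raises. A simplified sketch of that pattern (the names and the image API call are illustrative, not the exact nova source):

import functools


def delete_image_on_error(function):
    """Delete the partially created Glance image if the snapshot call fails."""
    @functools.wraps(function)
    def decorated_function(self, context, image_id, instance, *args, **kwargs):
        try:
            return function(self, context, image_id, instance, *args, **kwargs)
        except Exception:
            # The upload failed (here because vCenter removed the exported
            # object), so the half-written image is deleted before re-raising.
            self.image_api.delete(context, image_id)
            raise
    return decorated_function

In other words, Glance only deletes the image because the compute manager asks it to after the vCenter-side failure in step 3.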

Revision history for this message
Rodrigue DE ALMEIDA (rod2a) wrote :

I can confirm the same issue on my infrastructure.
The OVF seems to be deleted by vCenter and the image disappears from the Glance listing in Horizon.

Changed in nova:
status: New → Confirmed
Kevin Tibi (ktibi)
Changed in oslo.vmware:
status: New → Confirmed
Revision history for this message
Radoslav Gerganov (rgerganov) wrote :

There are two issues here:

1. Not updating the NFC lease -- this is addressed with https://review.openstack.org/#/c/281134
2. HTTP timeout when uploading the snapshotted image to Glance -- this happens because the OVF export takes a long time when the root disk is large and the HTTP upload to the vSphere datastore times out. The workaround is to increase the HTTP read timeout in vCenter:

Modify the readTimeoutMs setting to 600000 in vpxd.cfg:

<config>
  <vmacore>
    <http>
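      <!-- value is in milliseconds: 600000 ms = 10 minutes -->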
      <readTimeoutMs>600000</readTimeoutMs>
    </http>
  </vmacore>
..
</config>
Stop and restart vCenter.

Revision history for this message
Dongcan Ye (hellochosen) wrote :

I have also encountered this problem.

There is no error on the Nova side, but the VMware log shows that the OVF export failed, and the Glance log shows:
ERROR glance_store._drivers.vmware_datastore [req-8337f775-e066-4ebb-abef-63a7af8c4ced cbdd0dfe4cfc4f29b4e52ade68533103 72c0cad48f4a4b3dbd1c2f4f44e1d00f - - -] Communication error sending http PUT requestto the url /folder/openstack_glance/ed11986d-bca0-4b93-bbba-3d4723df58ef%3FdcPath%3DPublic-Cloud-DC%26dsName%3DPublic-Cloud-Share-Storage.
Got IOError [Errno 32] Broken pipe

Revision history for this message
Dongcan Ye (hellochosen) wrote :

Hi, Radoslav.
This bug only reproduces with vCenter 6.0 in our production environment.
I tried both the method from https://review.openstack.org/#/c/281134 and modifying the vCenter Server configuration, but it seems to make no difference.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (stable/mitaka)

Related fix proposed to branch: stable/mitaka
Review: https://review.openstack.org/322681

Revision history for this message
Kevin Tibi (ktibi) wrote :

+1 for the review.

No vCenter timeout with Mitaka + vCenter 6.

Revision history for this message
Jay Jahns (jjahns) wrote :

I didn't see this issue manifest with the patch on Mitaka; without the patch it was a different story. I used the vSphere backend and Swift, and the issue surfaced pre-patch and is gone post-patch.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (stable/mitaka)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: stable/mitaka
Review: https://review.openstack.org/322681
Reason: Cleaning this out of the stable/mitaka backlog since the change on master hasn't merged yet.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/281134
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2df83abaa0a5c828421fc38602cc1e5145b46ff4
Submitter: Jenkins
Branch: master

commit 2df83abaa0a5c828421fc38602cc1e5145b46ff4
Author: Radoslav Gerganov <email address hidden>
Date: Wed Feb 17 10:35:59 2016 +0200

    VMware: Refactor the image transfer

    The image transfer is unnecessary complicated and buggy. When
    transferring streamOptimized images we have to update the progress of
    the NFC lease to prevent timeouts.
    This patch replaces the complex usage of blocking queues and threads with
    a simple read+write loop. It has the same performance and the code is
    much cleaner. The NFC lease is updated with the loopingcall utility.

    Closes-Bug: #1546454
    Closes-Bug: #1278690
    Related-Bug: #1495429
    Change-Id: I96e8e0682bcc642a2a5c4b7d2851812bef60d2ff
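
A rough sketch of the approach this commit describes: copy the streamOptimized image with a plain read/write loop while a periodic looping call renews the NFC lease so vCenter does not expire it mid-transfer. The handle and callback names, the chunk size and the renewal interval below are illustrative assumptions, not the actual nova/oslo.vmware code (which may use oslo.vmware's own loopingcall helper):

from oslo_service import loopingcall


def transfer_image(read_handle, write_handle, update_lease_progress,
                   total_size, chunk_size=64 * 1024, interval=60):
    """Copy image data while periodically renewing the NFC lease."""
    transferred = [0]  # mutable cell so the looping call can see progress

    def _renew_lease():
        # update_lease_progress is assumed to issue the HttpNfcLeaseProgress
        # VIM call with the given percentage.
        percent = int(transferred[0] * 100 / total_size) if total_size else 0
        update_lease_progress(min(percent, 100))

    timer = loopingcall.FixedIntervalLoopingCall(_renew_lease)
    timer.start(interval=interval)
    try:
        while True:
            chunk = read_handle.read(chunk_size)
            if not chunk:
                break
            write_handle.write(chunk)
            transferred[0] += len(chunk)
    finally:
        timer.stop()
        read_handle.close()
        write_handle.close()

The part that matters for this bug is the periodic lease renewal: without it, exporting a large root disk takes long enough for the NFC lease to expire, vCenter removes the exported object, and closing the handle then fails with the ManagedObjectNotFound fault seen in the original traceback.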

Revision history for this message
Daniel 'f0o' Preussker (dpreussker) wrote :

Hi,

I'm running on Mitaka trunk from 23rd September 2016 and still experience the issue.

The OVF export times out in the task list; however, Glance keeps pulling data for a while until vCenter ultimately removes the OSTACK_SNAP_* object, at which point Glance removes the image.

Additionally, I see that even though Glance, Nova and the ESXi hosts are each connected via two 10GbE links, the snapshot upload never exceeds 50 Mbps.

CPU, RAM and disk are not the issue, as I can easily pull 100 Mbps via the host client on my laptop from the same ESX/vCenter, and Glance is well able to push Gbps into the ESX/vCenter.

Any ideas? (or is that unrelated?)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (stable/mitaka)

Reviewed: https://review.openstack.org/322681
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f60a6d1c9d8d1c8e4bba1db48de070dd5c9b22a8
Submitter: Jenkins
Branch: stable/mitaka

commit f60a6d1c9d8d1c8e4bba1db48de070dd5c9b22a8
Author: Radoslav Gerganov <email address hidden>
Date: Wed Feb 17 10:35:59 2016 +0200

    VMware: Refactor the image transfer

    The image transfer is unnecessary complicated and buggy. When
    transferring streamOptimized images we have to update the progress of
    the NFC lease to prevent timeouts.
    This patch replaces the complex usage of blocking queues and threads with
    a simple read+write loop. It has the same performance and the code is
    much cleaner. The NFC lease is updated with the loopingcall utility.

    Closes-Bug: #1546454
    Closes-Bug: #1278690
    Related-Bug: #1495429
    Change-Id: I96e8e0682bcc642a2a5c4b7d2851812bef60d2ff
    (cherry picked from commit 2df83abaa0a5c828421fc38602cc1e5145b46ff4)

tags: added: in-stable-mitaka
Revision history for this message
Jay Pipes (jaypipes) wrote :

Radoslav and Eric, can we mark this as Fix Released now?

Revision history for this message
Jay Pipes (jaypipes) wrote :

Going to set this to Fix Released since the patch is in both Newton and Mitaka.

Changed in nova:
status: Confirmed → Fix Released
Changed in oslo.vmware:
status: Confirmed → Fix Released