BUM tree collapsed with tor-agent stop

Bug #1653479 reported by kalagesan
This bug affects 1 person
Affects: Juniper Openstack (status tracked in Trunk)

          Status    Importance   Assigned to   Milestone
R3.1      New       High         kalagesan
R3.2      New       High         kalagesan
R4.0      Invalid   High         kalagesan
Trunk     Invalid   High         kalagesan

Bug Description

* version
contrail v3.1.1.0-45
QFX4 v14.1X53-D33
QFX11 v14.1X53-D33.2

* tor-agent and QFX
QFX4(172.23.11.45): tor-agent-4
QFX11(172.23.11.37): tor-agent-11

* topology diagram
[IXIA]===[QFX4]====[VXLAN]====[QFX11]===[IXIA]

The customer configured 500 virtual networks on Contrail.
The customer applied traffic to each virtual network and tested failure of a TSN node.

When the customer stopped the tor-agent process on one TSN node (172.23.10.196),
communication was lost on 76 of the 500 VNs.

The customer picked one of the disconnected VNs and checked its BUM tree.
The BUM tree of the picked VN was broken.

* BUM Tree status
[BUM tree status before stopping tor-agent]
via TSN(172.23.10.196)
QFX4-----(multicast:33:33:33:33:33:33)----->QFX11
QFX4<----(broadcast:ff:ff:ff:ff:ff:ff)------QFX11
* all 500 VNs

[stop tor-agent in TSN(172.23.10.196)]
-----
root@openc-14:~# date
Thu Dec 15 16:42:03 JST 2016
root@openc-14:~# service contrail-tor-agent-4 stop
date
contrail-tor-agent-4: stopped
root@openc-14:~# date
Thu Dec 15 16:42:04 JST 2016
-----

[Expected behavior]
The BUM tree should change to:
via TSN(172.23.10.197)-TSN(172.23.10.196)
QFX4-----(multicast:33:33:33:33:33:33)----->QFX11
QFX4<----(broadcast:ff:ff:ff:ff:ff:ff)------QFX11
* all 500 VNs

[Picked VN 1]
Its BUM tree is broken.

Check TSN(172.23.10.197)
root@openc-15:~# vxlan --get 524
VXLAN Table

VNID NextHop
----------------
524 640
root@openc-15:~# nh --get 640
Id:640 Type:Vrf_Translate Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:1978
Flags:Valid, Vxlan, Unicast Flood,
Vrf:1978

root@openc-15:~# rt --family bridge --dump 1978
Flags: L=Label Valid, Df=DHCP flood
vRouter bridge table 0/1978
Index DestMac Flags Label/VNID Nexthop
51356 90:1b:e:52:bb:ea Df - 3
296020 30:20:0:8:0:3 - 1
367936 ff:ff:ff:ff:ff:ff LDf 524 13327
909752 30:20:0:8:0:4 - 1
974164 30:48:4:0:0:8 LDf 524 16
root@openc-15:~# nh --get 13327
Id:13327 Type:Composite Fmly:AF_BRIDGE Rid:0 Ref_cnt:3 Vrf:1978
Flags:Valid, Multicast, L2,
Sub NH(label): 13318(0) 13040(0)

Id:13318 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:1978
Flags:Valid, Tor,
Sub NH(label): 16(524)

Id:16 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:41360 Vrf:0
Flags:Valid, Vxlan,
Oif:0 Len:14 Flags Valid, Vxlan, Data:5c 5e ab 03 57 f0 90 1b 0e 52 bb ea 08 00
Vrf:0 Sip:172.23.10.197 Dip:172.23.11.45 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<QFX4

Id:13040 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:1978
Flags:Valid, Evpn,
Sub NH(label):

There is a path to QFX4.
However, there is no path to the other TSN node (172.23.10.196).
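
For reference, this per-VN check can be scripted. The sketch below chains the vxlan/nh/rt commands used above to report whether a VN's ff:ff:ff:ff:ff:ff composite still carries a Fabric sub-NH (the TSN-to-TSN leg). The VNIDs and the field positions are taken from the outputs in this report and may need adjusting on other vrouter builds.
-----
# Rough sketch (run on a TSN): flag VNIDs whose broadcast composite has lost
# its Fabric member. Parsing assumes the output format shown above.
for VNID in 518 524; do
    VT_NH=$(vxlan --get $VNID | awk -v v=$VNID '$1 == v {print $2}')
    VRF=$(nh --get $VT_NH | sed -n 's/.*Vrf:\([0-9]*\).*/\1/p' | head -1)
    BC_NH=$(rt --family bridge --dump $VRF | awk '$2 == "ff:ff:ff:ff:ff:ff" {print $NF}')
    if nh --get $BC_NH | grep -q Fabric; then
        echo "VNID $VNID (vrf $VRF): Fabric sub-NH present - TSN-TSN leg OK"
    else
        echo "VNID $VNID (vrf $VRF): no Fabric sub-NH - BUM tree broken towards the peer TSN"
    fi
done
-----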

[Picked VN 2]
Its status is normal.

Check TSN(172.23.10.197)
root@openc-15:~# vxlan --get 518
VXLAN Table

VNID NextHop
----------------
518 1302
root@openc-15:~# nh --get 1302
Id:1302 Type:Vrf_Translate Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:2448
Flags:Valid, Vxlan, Unicast Flood,
Vrf:2448

root@openc-15:~# rt --family bridge --dump 2448
Flags: L=Label Valid, Df=DHCP flood
vRouter bridge table 0/2448
Index DestMac Flags Label/VNID Nexthop
59857 30:20:0:2:0:4 - 1
249460 30:48:4:0:0:2 LDf 518 16
344168 30:48:8:0:0:2 LDf 518 17
446440 30:20:0:2:0:3 - 1
522112 ff:ff:ff:ff:ff:ff LDf 518 12521
722868 90:1b:e:52:bb:ea Df - 3
root@openc-15:~# nh --get 12521
Id:12521 Type:Composite Fmly:AF_BRIDGE Rid:0 Ref_cnt:4 Vrf:2448
Flags:Valid, Multicast, L2,
Sub NH(label): 10434(0) 12254(0) 4507(0)

Id:10434 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:2448
Flags:Valid, Tor,
Sub NH(label): 16(518)

Id:16 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:41360 Vrf:0
Flags:Valid, Vxlan,
Oif:0 Len:14 Flags Valid, Vxlan, Data:5c 5e ab 03 57 f0 90 1b 0e 52 bb ea 08 00
Vrf:0 Sip:172.23.10.197 Dip:172.23.11.45 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<QFX4

Id:12254 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:2448
Flags:Valid, Evpn,
Sub NH(label):

Id:4507 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:2448
Flags:Valid, Fabric,
Sub NH(label): 1584(191914)

Id:1584 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:2047 Vrf:0
Flags:Valid, MPLSoGRE,
Oif:0 Len:14 Flags Valid, MPLSoGRE, Data:90 1b 0e 44 2c b5 90 1b 0e 52 bb ea 08 00
Vrf:0 Sip:172.23.10.197 Dip:172.23.10.196 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Connection between TSN and TSN

Check TSN(172.23.10.196)
root@openc-14:~# vxlan --get 518
VXLAN Table

VNID NextHop
----------------
518 12002
root@openc-14:~# nh --get 12002
Id:12002 Type:Vrf_Translate Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:483
Flags:Valid, Vxlan, Unicast Flood,
Vrf:483

root@openc-14:~# rt --family bridge --dump 483
Flags: L=Label Valid, Df=DHCP flood
vRouter bridge table 0/483
Index DestMac Flags Label/VNID Nexthop
25892 90:1b:e:44:2c:b5 Df - 3
66856 30:48:4:0:0:2 LDf 518 15
153520 ff:ff:ff:ff:ff:ff LDf 518 17808
289320 30:20:0:2:0:4 - 1
315000 30:20:0:2:0:3 - 1
752656 0:0:5e:0:1:0 Df - 3
785920 30:48:8:0:0:2 LDf 518 16
root@openc-14:~# nh --get 17808
Id:17808 Type:Composite Fmly:AF_BRIDGE Rid:0 Ref_cnt:4 Vrf:483
Flags:Valid, Multicast, L2,
Sub NH(label): 17803(0) 13643(0) 8469(0)

Id:17803 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:483
Flags:Valid, Tor,
Sub NH(label): 16(518)

Id:16 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:1506 Vrf:0
Flags:Valid, Vxlan,
Oif:0 Len:14 Flags Valid, Vxlan, Data:5c 5e ab 03 57 f0 90 1b 0e 44 2c b5 08 00
Vrf:0 Sip:172.23.10.196 Dip:172.23.11.37 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<QFX11

Id:13643 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:483
Flags:Valid, Evpn,
Sub NH(label):

Id:8469 Type:Composite Fmly: AF_INET Rid:0 Ref_cnt:2 Vrf:483
Flags:Valid, Fabric,
Sub NH(label): 1555(190816)

Id:1555 Type:Tunnel Fmly: AF_INET Rid:0 Ref_cnt:2047 Vrf:0
Flags:Valid, MPLSoGRE,
Oif:0 Len:14 Flags Valid, MPLSoGRE, Data:90 1b 0e 52 bb ea 90 1b 0e 44 2c b5 08 00
Vrf:0 Sip:172.23.10.196 Dip:172.23.10.197 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Connection between TSN and TSN

The customer tried to recover from this issue.
The recovery steps were as follows:
1. Stop and start contrail-tor-agent-4 on TSN (172.23.10.196)
-> Did not recover; the BUM tree was still broken.

2. Restart supervisor-vrouter on TSN (172.23.10.196)
-> The issue recovered; the BUM tree returned to normal.
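
For reference, the equivalent commands, using the service names that appear earlier in this report (assuming the standard init scripts, which also accept start/restart):
-----
# Attempt 1: stop and start the tor-agent (did not recover the BUM tree)
service contrail-tor-agent-4 stop
service contrail-tor-agent-4 start

# Attempt 2: restart supervisor-vrouter on the same TSN (recovered the BUM tree)
service supervisor-vrouter restart
-----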

The customer would like to understand the root cause of this issue from the logs provided.

The files below were collected from the customer and are available on my local log server:

root@10.219.48.123, pwd:Jtaclab123

File upload directory path: /home/kannan/bumtree

Contrail:
- testbed.py
- logs under /var/log/contrail
- gcore file of the tor-agent process

QFX:
- RSI
- archive of /var/log
- VN information for the VNs with communication loss

kalagesan (kalagesan)
tags: added: qfx
Jeba Paulaiyan (jebap)
tags: added: vrouter
kalagesan (kalagesan)
description: updated
tags: added: nttc
tags: added: vrouter3.1-45
Revision history for this message
kalagesan (kalagesan) wrote :

Hi Manish,

1. Topology file before and after issue is attached as requested.

2. Also, regarding the multicast replicator shown on QFX4 and QFX11 for the working and non-working VNs mentioned in this bug: it is shown in the ovsdb mac table.
gcv@QFX4> show ovsdb mac logical-switch Contrail-e91fbe02-9a5d-457c-91f2-70693558f810
Logical Switch Name: Contrail-e91fbe02-9a5d-457c-91f2-70693558f810
Mac IP Encapsulation Vtep
Address Address Address
ff:ff:ff:ff:ff:ff 0.0.0.0 Vxlan over Ipv4 172.23.11.45
30:48:04:00:00:01 0.0.0.0 Vxlan over Ipv4 172.23.11.45
30:48:08:00:00:01 0.0.0.0 Vxlan over Ipv4 172.23.11.37
ff:ff:ff:ff:ff:ff 0.0.0.0 Vxlan over Ipv4 172.23.10.197

gcv@QFX4> show ethernet-switching table vlan-name Contrail-e91fbe02-9a5d-457c-91f2-70693558f810

MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static
SE - statistics enabled, NM - non configured MAC, R - remote PE MAC, O - ovsdb MAC)

Ethernet switching table : 2 entries, 2 learned
Routing instance : default-switch
Vlan MAC MAC Age Logical
name address flags interface
Contrail-e91fbe02-9a5d-457c-91f2-70693558f810 30:48:04:00:00:01 D - ae5.3001
Contrail-e91fbe02-9a5d-457c-91f2-70693558f810 30:48:08:00:00:01 DO - vtep.32771

monitor interface traffic result:
xe-1/0/44 Up 51550636 (0) 51483032 (0)
xe-1/0/45 Up 51483032 (0) 51550636 (0)
xe-1/0/46 Up 526334340 (0) 525342139 (0)
xe-1/0/47 Up 626666 (2) 717271 (3)
et-1/0/48 Up 1464749420 (1) 1946185315 (0)
et-1/0/49 Up 1478339961 (0) 1942244611 (0)
xe-1/0/50:0 Up 299845 (1) 126177869 (0)
xe-1/0/50:1 Up 0 (0) 0 (0)
xe-1/0/50:2 Up 439525 (0) 133091177 (1000) <<<<<<< to control plane (to TSN)

It seems that MACs are learned correctly and the QFX forwards traffic to the TSN.
However, when this issue occurred, the non-working VNs (broken BUM tree) could not pass multicast traffic.

Regards,
Kannan

Revision history for this message
vivekananda shenoy (vshenoy83) wrote :

Hi Manish,

Any updates on this issue? Was the info shared by Kannan regarding the BUM tree state before and after the trigger helpful?

Regards,
Vivek

Revision history for this message
kalagesan (kalagesan) wrote :

Hi Manish,

The customer has reproduced the problem and collected the requested information.

1) Multicast tree before/after the issue
2) Multicast route (introspect) before/after
3) Agent control-node XMPP channel flap count before/after
4) Cores of all TSNs (contrail-vrouter process) and TAs (contrail-tor-agent), i.e. for both active and backup
5) Timestamp of the issue reproduction

All logs are uploaded to my log server 10.219.48.123 (root/Jtaclab123).
Log upload path:

/home/kannan/bumtree/newlogs/To_JTAC_20170223

[root@LocalStorage To_JTAC_20170223]# ls -lrh
total 813M
-rw-r--r--. 1 root root 213M Feb 23 2017 openc35-vrouter-core.zip
-rw-r--r--. 1 root root 104M Feb 23 2017 openc35-tor23-core.zip
-rw-r--r--. 1 root root 70M Feb 23 2017 openc35-tor11-core.zip
-rw-r--r--. 1 root root 234M Feb 23 2017 openc34-vrouter-core.zip
-rw-r--r--. 1 root root 135M Feb 23 2017 openc34-tor11-core.zip
-rw-r--r--. 1 root root 56K Feb 23 2017 bum-collapsed-20170221.pptx
drwxr-xr-x. 8 root root 4.0K Feb 23 10:42 before
drwxr-xr-x. 8 root root 4.0K Feb 23 10:42 after
-rw-r--r--. 1 root root 33K Feb 23 2017 20170221_This issue reproduce time line.txt
-rw-r--r--. 1 root root 24M Feb 21 14:30 20170221_openc35_contrail_log.tar.gz
-rw-r--r--. 1 root root 35M Feb 21 14:30 20170221_openc34_contrail_log.tar.gz

The customer assumes that the BUM tree between the TSNs was already disconnected before the issue occurred.
The customer created that state and then stopped the tor-agent.
As a result, this issue was reproduced.

For the detailed reproduction steps, refer to "bum-collapsed-20170221.pptx" and "20170221_This issue reproduce time line.txt".

The customer is concerned about the root cause:
  - Was convergence of the BUM tree delayed when the OVSDB connection switched over?
  or
  - Was the BUM tree between the TSNs not connected to begin with (the pre-existing state the customer assumes)?
  or something else?

Regards,
Kannan

Revision history for this message
Hari Prasad Killi (haripk) wrote :

Hey Kannan,

On both TSNs the maximum number of labels is not configured to match the scale; it is 5120. The CN shows that it is out of labels for the VRF (default_test).
Please refer this: https://github.com/Juniper/contrail-controller/wiki/Vrouter-Module-Parameters
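
For reference, a minimal provisioning sketch rather than a verified procedure: it assumes the limit is the vr_mpls_labels vrouter kernel module parameter described on the wiki page above and that the module options are set via modprobe. The 11520 value that CTC used later in this report is shown as an example; size it to the actual VN/VRF scale.
-----
# Sketch only -- the parameter name (vr_mpls_labels) and the file name are
# assumptions based on the wiki page linked above; verify there before applying.
echo "options vrouter vr_mpls_labels=11520" >> /etc/modprobe.d/vrouter.conf
# Optionally confirm that the module declares such a parameter:
modinfo -p vrouter | grep -i mpls
# The new value only takes effect after the vrouter kernel module is reloaded
# (typically a reboot of the TSN node).
-----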

Here are the inputs from logs sent:
In CN, no labels for TSN1 and TSN2.

address       label_block   label   router_id
172.23.0.71   4098-4608     0       0.0.0.0
172.23.0.72   4098-4608     0       0.0.0.0

On the TSN with router_id_ = 172.23.0.72, vrouter_max_labels_ = 5120.

On the TSN with router_id_ = 172.23.0.71, vrouter_max_labels_ = 5120.

Also, this time they collected the tor-agent introspect but named it as TSN agent introspect output, which was confusing. A similar mix-up happened last time as well.
Please ask them to cross-check.

Thanks,
Manish

Revision history for this message
Hari Prasad Killi (haripk) wrote :

The label limit needs to be provisioned appropriately.

Revision history for this message
kalagesan (kalagesan) wrote :

Hi Manish,

CTC reproduced this issue again in their lab.

The customer also reproduced this issue again in the customer lab and captured the requested logs.

This time CTC tuned the vrouter parameters in their lab environment;
the MPLS label limit is 11520.

Both the partner and the customer have captured the logs we requested.

- 2016-1216-0321_customer_lab(20170330).zip
- 2016-1216-0321_ctc_lab(20170331).zip

Please check it.

[root@LocalStorage march31stlogs]# pwd
/home/kannan/bumtree/march31stlogs

[root@LocalStorage march31stlogs]# ls -ltr
total 8
drwxr-xr-x. 4 root root 4096 Apr 3 06:10 2016-1216-0321_customer_lab(20170330)
drwxr-xr-x. 4 root root 4096 Apr 3 06:12 2016-1216-0321_ctc_lab(20170331)

[root@LocalStorage core]# pwd
/home/kannan/bumtree/march31stlogs/2016-1216-0321_customer_lab(20170330)/core

[root@LocalStorage 2016-1216-0321_customer_lab(20170330)]# ls -ltr
total 905388
-rw-r--r--. 1 root root 10179 Mar 30 17:14 openc-35-cli.log
-rw-r--r--. 1 root root 304248949 Mar 30 17:18 20170330_openc31_contrail.tar.gz
-rw-r--r--. 1 root root 309476163 Mar 30 17:20 20170330_openc32_contrail.tar.gz
-rw-r--r--. 1 root root 231357084 Mar 30 17:21 20170330_openc33_contrail.tar.gz
-rw-r--r--. 1 root root 43252569 Mar 30 17:24 20170330_openc34_contrail.tar.gz
-rw-r--r--. 1 root root 38739337 Mar 30 17:24 20170330_openc35_contrail.tar.gz
-rw-r--r--. 1 root root 10041 Mar 30 17:26 openc-34-cli.log
-rw-r--r--. 1 root root 2681 Mar 31 13:59 Timeline.txt
drwxr-xr-x. 2 root root 4096 Apr 3 06:10 core
drwxr-xr-x. 16 root root 4096 Apr 3 06:10 introspect

[root@LocalStorage 2016-1216-0321_customer_lab(20170330)]# cd core
[root@LocalStorage core]# ls -ltr
total 1720708
-rw-r--r--. 1 root root 268844563 Mar 31 13:27 ctrl-openc-31-core.tar.gz
-rw-r--r--. 1 root root 427999821 Mar 31 13:33 ctrl-openc-32-core.tar.gz
-rw-r--r--. 1 root root 34831216 Mar 31 13:40 tor-agent-6-openc-34-core.tar.gz
-rw-r--r--. 1 root root 23189548 Mar 31 13:41 tor-agent-6-openc-35-core.tar.gz
-rw-r--r--. 1 root root 58727048 Mar 31 13:43 tor-agent-23-openc-35-core.tar.gz
-rw-r--r--. 1 root root 247422150 Mar 31 13:44 vrouter-openc-34-core.tar.gz
-rw-r--r--. 1 root root 177778329 Mar 31 13:50 vrouter-openc-35-core.tar.gz
-rw-r--r--. 1 root root 523189381 Mar 31 13:59 ctrl-openc-33-core.tar.gz

 Logs from customer setup:

 [root@LocalStorage 2016-1216-0321_customer_lab(20170330)]# pwd
/home/kannan/bumtree/march31stlogs/2016-1216-0321_customer_lab(20170330)

 [root@LocalStorage 2016-1216-0321_customer_lab(20170330)]# ls -ltr
total 905388
-rw-r--r--. 1 root root 10179 Mar 30 17:14 openc-35-cli.log
-rw-r--r--. 1 root root 304248949 Mar 30 17:18 20170330_openc31_contrail.tar.gz
-rw-r--r--. 1 root root 309476163 Mar 30 17:20 20170330_openc32_contrail.tar.gz
-rw-r--r--. 1 root root 231357084 Mar 30 17:21 20170330_openc33_contrail.tar.gz
-rw-r--r--. 1 root root 43252569 Mar 30 17:24 20170330_openc34_contrail.tar.gz
-rw-r--r--. 1 root root 38739337 Mar 30 17:24 20170330_openc35_contrail.tar.gz
-rw-r--r--. 1 root root 10041 Mar 30 17:26 openc-34-cli.log
-rw-r--r--. 1 root root 2681 Mar 31 13:59 Timeline.txt
drwxr-xr-x. 2 root root...


Revision history for this message
kalagesan (kalagesan) wrote :

Logs are on my server root@10.219.48.123, pwd:Jtaclab123

path:[root@LocalStorage march31stlogs]# pwd
/home/kannan/bumtree/march31stlogs

Regards,
Kannan

Revision history for this message
Manish Singh (manishs) wrote :

Labels exhausted.
Increasing the label range fixes the issue.

Revision history for this message
kalagesan (kalagesan) wrote :

Manish is working on the new set of logs provided from the customer and partner setups.

Regards,
Kannan

kalagesan (kalagesan)
information type: Proprietary → Public
Revision history for this message
vivekananda shenoy (vshenoy83) wrote :

Hi Manish,

Any updates on this issue?

Regards.
Vivek

Revision history for this message
Manish Singh (manishs) wrote :

There are 1000 VRFs and 500 labels allocated for multicast, so no labels are left for the CN to assign. Increase the label limits in vrouter.
https://github.com/Juniper/contrail-controller/wiki/Vrouter-Module-Parameters

Info collected from latest cores given by Kannan.

Revision history for this message
Manish Singh (manishs) wrote :

The second time the same issue was reported, the level-1 forwarder (@172.23.0.61) did not have a label allocated for 172.23.0.63, even though a label is present on 172.23.0.63.
Peering is also present among the control nodes.

The control-node team is looking into the issue.

trees:
  group: 255.255.255.255   source: 0.0.0.0
  level0_forwarders / level1_forwarders
  (columns: address, label_block, label, router_id, links)
    172.23.0.71   7809-11519   8760   0.0.0.0
      links:
        172.23.0.71   8760-8760   8760   172.23.0.61
        172.23.0.72   4500-4500   0      172.23.0.63

more

trees:
  group: 255.255.255.255   source: 0.0.0.0
  level0_forwarders
  (columns: address, label_block, label)
    172.23.0.72   4098-7808   4500

Revision history for this message
Jeba Paulaiyan (jebap) wrote :

As per Vivek, the recent code has a possible fix that is deployed at the customer site. Waiting for customer feedback. If the customer comes back with the issue, please re-open with details.

Revision history for this message
Sandeep Sridhar (ssandeep) wrote :

The issue is seen again on 3.1.3-81. The BUM tree was broken. We see the multicast manager assigning label 0 to certain level-1 forwarders.

<level1_forwarders type="list" identifier="4">
  <list type="struct" size="2">
    <ShowMulticastForwarder>
      <address type="string" identifier="1">172.23.10.202</address>
      <label_block type="string" identifier="2">129477-129477</label_block>
      <label type="u32" identifier="3">129477</label>
      <router_id type="string" identifier="5">172.23.10.205</router_id>
      <links type="list" identifier="4">
        <list type="struct" size="0">
        </list>
      </links>
    </ShowMulticastForwarder>
    <ShowMulticastForwarder>
      <address type="string" identifier="1">172.23.10.201</address>
      <label_block type="string" identifier="2">129864-129864</label_block>
      <label type="u32" identifier="3">0</label>
      <router_id type="string" identifier="5">172.23.10.207</router_id>
      <links type="list" identifier="4">
        <list type="struct" size="0">
        </list>
      </links>
    </ShowMulticastForwarder>
  </list>
</level1_forwarders>

<ShowMulticastForwarder>
  <address type="string" identifier="1">172.23.10.201</address>
  <label_block type="string" identifier="2">130225-130225</label_block>
  <label type="u32" identifier="3">0</label>
  <router_id type="string" identifier="5">172.23.10.207</router_id>
  <links type="list" identifier="4">
    <list type="struct" size="0">
    </list>
  </links>
</ShowMulticastForwarder>
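
To spot such entries quickly in a saved introspect dump, a grep along the following lines works against XML shaped like the excerpt above (the file name is hypothetical):
-----
# Print forwarders whose assigned multicast label is 0, plus the three
# preceding lines (ShowMulticastForwarder, address, label_block) for context.
grep -B3 '<label type="u32" identifier="3">0</label>' multicast_tree_introspect.xml
-----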

Logs and cores shared with Ananth.

Revision history for this message
vivekananda shenoy (vshenoy83) wrote :

Re-opening the issue, as NTTC is seeing it again on the latest 3.1-85.
