invalid TCP circuit channel install

Bug #541291 reported by Jeff Hill
This bug affects 1 person

Affects: EPICS Base
Status: Fix Released
Importance: High
Assigned to: Jeff Hill

Bug Description

In the process of trying continuous runs of connect-everything-then-disconnect-everything with the archive engine, I've now twice run into a deadlock that appears to be purely inside CAC.
At least two threads, in this example #8 and #5, are waiting for a mutex held by the CAC-UDP thread (#11).
Meanwhile, the udpRecvThread is stuck waiting for a lock inside tcpiiu::installChannel(), at the sendThreadFlushEvent.signal() call.

This time around, I've instrumented libCom/osi/os/posix/osdEvent.c.
I added dummy vars to the start and end of epicsEventOSD:

typedef struct epicsEventOSD {
     int dummy1;
     pthread_mutex_t mutex;
     pthread_cond_t cond;
     int isFull;
     int dummy2;
} epicsEventOSD;

... which I initialize here:

epicsEventId epicsEventCreate(epicsEventInitialState initialState) {
     epicsEventOSD *pevent;
     int status;

     pevent = callocMustSucceed(1,sizeof(*pevent),"epicsEventCreate");
...
     pevent->dummy1 = 1;
     pevent->dummy2 = 2;
     return((epicsEventId)pevent);
}

Then I clobber all with 0xFF here:
void epicsEventDestroy(epicsEventId pevent) {
...
     /* For debugging */
     memset(pevent, 0xFF, sizeof(*pevent));
     free(pevent);
}

In tcpiiu::installChannel, the content of "this" looks more or less reasonable.
There's a lot of stuff in there that I have not tried to understand.
But the epicsEvent sendThreadFlushEvent looks iffy:

(gdb) fra 9
#9 0x0000002a965a8c9d in tcpiiu::installChannel (this=0x6b5c98, guard=@0x40103c90, chan=@0x705d20, sidIn=4294967295, typeIn=65535,
countIn=0)
at ../tcpiiu.cpp:1872
1872 this->sendThreadFlushEvent.signal ();
(gdb) print *this->sendThreadFlushEvent->id
$9 = {dummy1 = -1, mutex = {__m_reserved = 2, __m_count = -1, __m_owner = 0xffffffffffffffff, __m_kind = -1, __m_lock = {__status = -1, __spinlock = -1}}, cond = {__c_lock = {__status = -1, __spinlock = -1}, __c_waiting = 0xffffffffffffffff, __padding = 'ø' <repeats 16 times>, __align = -1}, isFull = -1, dummy2 = -1}
***
That's all 0xFF!

So this looks like the tcpiiu sendThreadFlushEvent has been destroyed while the udpRecvThread still wants to access it.
Of course this would be something that Purify might find.
With Valgrind I cannot run the same test; it's just too slow to ever get anywhere.

Ernest, this time I think it would be good to have Purify.
But of course we'd need it for at least the 32- and 64-bit Linux builds, so I don't know if their licensing even allows that without paying twice.

Jeff, any obvious idea where to look for the destroyer of the sendThreadFlushEvent?
Of course I can't rule out that the engine is somehow causing this, but I don't know where else to look, other than slowly adding "dummy" spacers and memset(0xff) calls to all destructors.

-Kay

Thread 13 (Thread 1074010464 (LWP 18272)):
#0 0x00000032343088da in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/tls/libpthread.so.0
#1 0x0000002a9671a23d in condWait (condId=0x56c6b0, mutexId=0x56c688) at ../../../src/libCom/osi/os/posix/osdEvent.c:78
#2 0x0000002a9671a591 in epicsEventWait (pevent=0x56c680) at ../../../src/libCom/osi/os/posix/osdEvent.c:144
#3 0x0000002a967130f5 in epicsEvent::wait (this=0x56c458) at ../../../src/libCom/osi/epicsEvent.cpp:63
#4 0x0000002a9670ff52 in ipAddrToAsciiEnginePrivate::run (this=0x56c010) at ../../../src/libCom/misc/ipAddrToAsciiAsynchronous.cpp:244
#5 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x56c468) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 12 (Thread 1074542944 (LWP 18274)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056b6d8 in ?? ()
#2 0x0000000000000000 in ?? ()

Thread 11 (Thread 1074809184 (LWP 18287)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x0000000000705d20 in ?? ()
#2 0x0000000000705d20 in ?? ()
#3 0x0000003234307b04 in pthread_mutex_lock () from /lib64/tls/libpthread.so.0
#4 0x000000000056b790 in ?? ()
#5 0x0000000040103a70 in ?? ()
#6 0x0000002a96712bb4 in epicsMutex::lock (this=0x6bd348) at ../../../src/libCom/osi/epicsMutex.cpp:213
#7 0x0000002a9671a441 in epicsEventSignal (pevent=0x6bd340) at ../../../src/libCom/osi/os/posix/osdEvent.c:124
#8 0x0000002a967130da in epicsEvent::signal (this=0x6b5ff8) at ../../../src/libCom/osi/epicsEvent.cpp:57
#9 0x0000002a965a8c9d in tcpiiu::installChannel (this=0x6b5c98, guard=@0x40103c90, chan=@0x705d20, sidIn=4294967295, typeIn=65535, countIn=0) at ../tcpiiu.cpp:1872
#10 0x0000002a9658a3e2 in cac::transferChanToVirtCircuit (this=0x56ba40, cid=9184869, sid=4294967295, typeCode=65535, count=0, minorVersionNumber=11, addr=@0x40103d10, currentTime=@0x40103f50) at ../cac.cpp:552
#11 0x0000002a965a125f in udpiiu::searchRespAction (this=0x725350, msg=@0x7257c8, addr=@0x40103fa0, currentTime=@0x40103f50) at ../udpiiu.cpp:665
#12 0x0000002a965a16b1 in udpiiu::postMsg (this=0x725350, net_addr=@0x40103fa0, pInBuf=0x7257c8 "\006", blockSize=24, currentTime=@0x40103f50) at ../udpiiu.cpp:832
#13 0x0000002a965a0cb2 in udpRecvThread::run (this=0x735780) at ../udpiiu.cpp:380
#14 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x7357a0) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 10 (Thread 1079310688 (LWP 18317)):
#0 0x00000032343088da in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/tls/libpthread.so.0
#1 0x0000002a9671a23d in condWait (condId=0x5bcab0, mutexId=0x5bca88) at ../../../src/libCom/osi/os/posix/osdEvent.c:78
#2 0x0000002a9671a591 in epicsEventWait (pevent=0x5bca80) at ../../../src/libCom/osi/os/posix/osdEvent.c:144
#3 0x0000002a96702cd9 in errlogThread () at ../../../src/libCom/error/errlog.c:468

Thread 9 (Thread 1087519072 (LWP 18355)):
#0 0x00000032338bebe6 in __select_nocancel () from /lib64/tls/libc.so.6
#1 0x000000000041b9d2 in HTTPServer::run (this=0x2a96a36440) at ../HTTPServer.cpp:154
#2 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x2a96a36448) at ../../../src/libCom/osi/epicsThread.cpp:59

*** Waiting for mutex held by CAC-UDP (thread 11)

Thread 8 (Thread 1075865952 (LWP 14383)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056b6d8 in ?? ()
#2 0x0000000000000010 in ?? ()
#3 0x0000002a9671a0c1 in epicsMutexOsdLock (pmutex=0x56b750) at ../../../src/libCom/osi/os/posix/osdMutex.c:121
#4 0x0000002a967128f5 in epicsMutexLock (pmutexNode=0x56b790) at ../../../src/libCom/osi/epicsMutex.cpp:125
#5 0x0000002a96712bb4 in epicsMutex::lock (this=0x56b6d8) at ../../../src/libCom/osi/epicsMutex.cpp:213
#6 0x0000002a9621ca3d in epicsGuard<epicsMutex>::epicsGuard () at ../../../include/tsSLList.h:131
#7 0x0000002a96593c55 in ca_create_channel (name_str=0x776e00 "CCL_Diag:BLM102:MPSCalcPulseLossLimitRb", conn_func=0x42573c <ProcessVariable::connection_handler(connection_handler_args)>, puser=0x776b90, priority=20, chanptr=0x40205b80) at ../access.cpp:315
#8 0x0000000000424731 in ProcessVariable::start (this=0x776b90, guard=@0x40205bd0) at ../ProcessVariable.cpp:152
#9 0x000000000041f5d1 in SampleMechanism::start (this=0x776b70, guard=@0x40205c50) at ../SampleMechanism.cpp:78
#10 0x00000000004161e7 in ArchiveChannel::start (this=0x776990, guard=@0x40205cb0) at ../ArchiveChannel.cpp:240
#11 0x000000000040bcbc in Engine::start (this=0x56afe0, engine_guard=@0x40205d30) at ../Engine.cpp:158
#12 0x00000000004109d4 in restart (connection=0x113e220, path=@0x40205db0, user_arg=0x56afe0) at ../EngineServer.cpp:598
#13 0x000000000041d304 in HTTPClientConnection::analyzeInput (this=0x113e220) at ../HTTPServer.cpp:565
#14 0x000000000041cebb in HTTPClientConnection::handleInput (this=0x113e220) at ../HTTPServer.cpp:482
#15 0x000000000041c9e6 in HTTPClientConnection::run (this=0x113e220) at ../HTTPServer.cpp:391
#16 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x113e228) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 7 (Thread 1078778208 (LWP 14385)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056b6d8 in ?? ()
#2 0x0000000000000000 in ?? ()

Thread 6 (Thread 1076926816 (LWP 14387)):
#0 0x00000032343088da in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/tls/libpthread.so.0
#1 0x0000002a9671a23d in condWait (condId=0xe3a8f0, mutexId=0xe3a8c8) at ../../../src/libCom/osi/os/posix/osdEvent.c:78
#2 0x0000002a9671a591 in epicsEventWait (pevent=0xe3a8c0) at ../../../src/libCom/osi/os/posix/osdEvent.c:144
#3 0x0000002a967130f5 in epicsEvent::wait (this=0x6b4e00) at ../../../src/libCom/osi/epicsEvent.cpp:63
#4 0x0000002a965a2c29 in tcpSendThread::run (this=0x6b4be0) at ../tcpiiu.cpp:85
#5 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x6b4be8) at ../../../src/libCom/osi/epicsThread.cpp:59

*** Also waiting for mutex held by CAC-UDP (thread 11)

Thread 5 (Thread 1082222944 (LWP 14389)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056b6d8 in ?? ()
#2 0x0000000000000063 in ?? ()
#3 0x0000002a9671a0c1 in epicsMutexOsdLock (pmutex=0x56b750) at ../../../src/libCom/osi/os/posix/osdMutex.c:121
#4 0x0000002a967128f5 in epicsMutexLock (pmutexNode=0x56b790) at ../../../src/libCom/osi/epicsMutex.cpp:125
#5 0x0000002a96712bb4 in epicsMutex::lock (this=0x56b6d8) at ../../../src/libCom/osi/epicsMutex.cpp:213
#6 0x0000002a9621ca3d in epicsGuard<epicsMutex>::epicsGuard () at ../../../include/tsSLList.h:131
#7 0x0000002a965b4302 in ca_field_type (pChan=0x6eba50) at ../oldChannelNotify.cpp:632
#8 0x0000000000425919 in ProcessVariable::connection_handler (arg={chid = 0x6eba50, op = 6}) at ../ProcessVariable.cpp:424
#9 0x0000002a965b296b in oldChannelNotify::connectNotify (this=0x6eba50, guard=@0x40815da0) at ../oldChannelNotify.cpp:93
#10 0x0000002a9659cb50 in nciu::connect (this=0x706110, nativeType=6, nativeCount=1, sidIn=330230, guard=@0x40815da0) at ../nciu.cpp:169
#11 0x0000002a9658bcd7 in cac::createChannelRespAction (this=0x56ba40, mgr=@0x40815f70, iiu=@0x6b38a8, hdr=@0x6b3bb0) at ../cac.cpp:1062
#12 0x0000002a9658c06e in cac::executeResponse (this=0x56ba40, mgr=@0x40815f70, iiu=@0x6b38a8, currentTime=@0x40815fa0, hdr=@0x6b3bb0, pMshBody=0x6bd5c0 "†ﬁ\") at ../cac.cpp:1121
#13 0x0000002a965a6d3f in tcpiiu::processIncoming (this=0x6b38a8, currentTime=@0x40815fa0, mgr=@0x40815f70) at ../tcpiiu.cpp:1224
#14 0x0000002a965a3dff in tcpRecvThread::run (this=0x6b3990) at ../tcpiiu.cpp:530
#15 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x6b3998) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 4 (Thread 1090697568 (LWP 14391)):
#0 0x00000032343088da in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/tls/libpthread.so.0
#1 0x0000002a9671a23d in condWait (condId=0xe36760, mutexId=0xe36738) at ../../../src/libCom/osi/os/posix/osdEvent.c:78
#2 0x0000002a9671a591 in epicsEventWait (pevent=0xe36730) at ../../../src/libCom/osi/os/posix/osdEvent.c:144
#3 0x0000002a967130f5 in epicsEvent::wait (this=0x6b3c08) at ../../../src/libCom/osi/epicsEvent.cpp:63
#4 0x0000002a965a2c29 in tcpSendThread::run (this=0x6b39e8) at ../tcpiiu.cpp:85
#5 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x6b39f0) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 3 (Thread 1092553056 (LWP 13156)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056e9d8 in ?? ()
#2 0x0000003233a2f620 in __malloc_initialize_hook () from /lib64/tls/libc.so.6
#3 0x0000003234307b1f in pthread_mutex_lock () from /lib64/tls/libpthread.so.0
#4 0x000000000056e9b0 in ?? ()
#5 0x00000000411efba0 in ?? ()
#6 0x0000000000443d06 in ~epicsMutexGuard (this=0x56b1b8) at ../Guard.h:30
#7 0x0000002a9671a0c1 in epicsMutexOsdLock (pmutex=0x56b1b0) at ../../../src/libCom/osi/os/posix/osdMutex.c:121
#8 0x0000002a967128f5 in epicsMutexLock (pmutexNode=0x56b230) at ../../../src/libCom/osi/epicsMutex.cpp:125
#9 0x00000000004439b3 in OrderedMutex::lock (this=0x56aff0, file=0x449593 "../EngineServer.cpp", line=593) at ../OrderedMutex.cpp:260
#10 0x00000000004416bc in Guard::lock (this=0x411efd30, file=0x449593 "../EngineServer.cpp", line=593) at ../Guard.cpp:41
#11 0x00000000004099b5 in Guard (this=0x411efd30, file=0x449593 "../EngineServer.cpp", line=593, guardable=@0x56afe0) at ../../../../include/Guard.h:67
#12 0x0000000000410990 in restart (connection=0x11709d0, path=@0x411efdb0, user_arg=0x56afe0) at ../EngineServer.cpp:593
#13 0x000000000041d304 in HTTPClientConnection::analyzeInput (this=0x11709d0) at ../HTTPServer.cpp:565
#14 0x000000000041cebb in HTTPClientConnection::handleInput (this=0x11709d0) at ../HTTPServer.cpp:482
#15 0x000000000041c9e6 in HTTPClientConnection::run (this=0x11709d0) at ../HTTPServer.cpp:391
#16 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x11709d8) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 2 (Thread 1088575840 (LWP 12394)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056e9d8 in ?? ()
#2 0x0000003233a2f620 in __malloc_initialize_hook () from /lib64/tls/libc.so.6
#3 0x0000003234307b1f in pthread_mutex_lock () from /lib64/tls/libpthread.so.0
#4 0x000000000056e9b0 in ?? ()
#5 0x0000000040e24ba0 in ?? ()
#6 0x0000000000443d06 in ~epicsMutexGuard (this=0x56b1b8) at ../Guard.h:30
#7 0x0000002a9671a0c1 in epicsMutexOsdLock (pmutex=0x56b1b0) at ../../../src/libCom/osi/os/posix/osdMutex.c:121
#8 0x0000002a967128f5 in epicsMutexLock (pmutexNode=0x56b230) at ../../../src/libCom/osi/epicsMutex.cpp:125
#9 0x00000000004439b3 in OrderedMutex::lock (this=0x56aff0, file=0x449593 "../EngineServer.cpp", line=593) at ../OrderedMutex.cpp:260
#10 0x00000000004416bc in Guard::lock (this=0x40e24d30, file=0x449593 "../EngineServer.cpp", line=593) at ../Guard.cpp:41
#11 0x00000000004099b5 in Guard (this=0x40e24d30, file=0x449593 "../EngineServer.cpp", line=593, guardable=@0x56afe0) at ../../../../include/Guard.h:67
#12 0x0000000000410990 in restart (connection=0x1171270, path=@0x40e24db0, user_arg=0x56afe0) at ../EngineServer.cpp:593
#13 0x000000000041d304 in HTTPClientConnection::analyzeInput (this=0x1171270) at ../HTTPServer.cpp:565
#14 0x000000000041cebb in HTTPClientConnection::handleInput (this=0x1171270) at ../HTTPServer.cpp:482
#15 0x000000000041c9e6 in HTTPClientConnection::run (this=0x1171270) at ../HTTPServer.cpp:391
#16 0x0000002a967113b9 in epicsThreadCallEntryPoint (pPvt=0x1171278) at ../../../src/libCom/osi/epicsThread.cpp:59

Thread 1 (Thread 182914034624 (LWP 18263)):
#0 0x000000323430ad1b in __lll_mutex_lock_wait () from /lib64/tls/libpthread.so.0
#1 0x000000000056e9d8 in ?? ()
#2 0x0000000000000000 in ?? ()

Original Mantis Bug: mantis-258
    http://www.aps.anl.gov/epics/mantis/view_bug_page.php?f_id=258

Tags: ca 3.14
Revision history for this message
Jeff Hill (johill-lanl) wrote:

possibly related to #257

Jeff Hill (johill-lanl) wrote:

I see that this occurs when the lock is released in the middle of switching the channel from a search timer to a TCP circuit. Due to changes in the timer library some time back, we no longer need to release the lock when calling epicsTimer::start() if we also hold that same lock in the timer expire callback. Releasing the lock is still necessary when calling epicsTimer::cancel(), but epicsTimer::start() no longer calls epicsTimer::cancel().

edited on: 2006-05-22 18:33

Jeff Hill (johill-lanl) wrote:

These changes should resolve the problem.

cvs diff -u -wb -i -- searchTimer.cpp tcpRecvWatchdog.cpp (in directory D:\users\hill\R3.14.dll_hell_fix\epics\base\src\ca\)
Index: searchTimer.cpp
===================================================================
RCS file: /net/phoebus/epicsmgr/cvsroot/epics/base/src/ca/searchTimer.cpp,v
retrieving revision 1.33.2.13
diff -c -u -w -b -i -r1.33.2.13 searchTimer.cpp
cvs diff: conflicting specifications of output style
--- searchTimer.cpp 13 Apr 2005 17:28:14 -0000 1.33.2.13
+++ searchTimer.cpp 22 May 2006 22:34:35 -0000
@@ -340,8 +340,6 @@
             this->searchResponses++;
             if ( this->searchResponses == this->searchAttempts ) {
                 if ( this->chanListReqPending.count () ) {
-                    // avoid timer cancel block deadlock
-                    epicsGuardRelease < epicsMutex > unguard ( guard );
                     //
                     // when we get 100% success immediately
                     // send another search request
Index: tcpRecvWatchdog.cpp
===================================================================
RCS file: /net/phoebus/epicsmgr/cvsroot/epics/base/src/ca/tcpRecvWatchdog.cpp,v
retrieving revision 1.33.2.16
diff -c -u -w -b -i -r1.33.2.16 tcpRecvWatchdog.cpp
cvs diff: conflicting specifications of output style
--- tcpRecvWatchdog.cpp 8 Nov 2005 21:25:23 -0000 1.33.2.16
+++ tcpRecvWatchdog.cpp 22 May 2006 22:34:52 -0000
@@ -96,7 +96,6 @@
 {
     guard.assertIdenticalMutex ( this->mutex );
     if ( ! ( this->shuttingDown || this->beaconAnomaly || this->probeResponsePending ) ) {
-        epicsGuardRelease < epicsMutex > unguard ( guard );
         this->timer.start ( *this, this->period );
         debugPrintf ( ("saw a normal beacon - reseting circuit receive watchdog\n") );
     }
@@ -124,11 +123,6 @@

     if ( ! ( this->shuttingDown || this->probeResponsePending ) ) {
         this->beaconAnomaly = false;
-        // dont hold the lock for fear of deadlocking
-        // because cancel is blocking for the completion
-        // of expire() which takes the lock - it take also
-        // the callback lock
-        epicsGuardRelease < epicsMutex > unguard ( guard );
         this->timer.start ( *this, this->period );
         debugPrintf ( ("received a message - reseting circuit recv watchdog\n") );
     }
@@ -158,8 +152,6 @@
         }
     }
     if ( restartNeeded ) {
-        // timer callback takes the callback mutex and the cac mutex
-        epicsGuardRelease < epicsMutex > cbGuardRelease ( cbGuard );
         this->timer.start ( *this, restartDelay );
         debugPrintf ( ("recv wd restarted with delay %f\n", restartDelay) );
     }
@@ -189,11 +181,6 @@
     // not trust the beacon as an indicator of a healthy server until we
     // receive at least one message from the server.
     if ( this->probeResponsePending && ! this->shuttingDown ) {
-        // we avoid calling this with the lock applied because
-        // it restarts the recv wd timer, this might block
-        // until a recv wd timer expire callback completes, and
-        // this callback takes the lock
-        epicsGuardRelease < epicsMutex > unguard ( guar...


Jeff Hill (johill-lanl) wrote:

I think that you have found another bug. Thanks for doing an excellent job of beating the vermin out of CA.

The details, including a patch, are in Mantis 258. Please try the patch and let me know if it fixes the problems. The simple change in searchTimer.cpp alone should take care of your bug, but if you have time please try also the changes in tcpRecvWatchdog.cpp that are of a similar nature and should make the code more robust.

Jeff Hill (johill-lanl) wrote:

I also created Mantis 257, but suspect that the Mantis 258 bug may be causing all of the problems you have reported lately, which may all be related to deleting a channel at the precise instant that it is being transferred to a TCP circuit (because a UDP search response has arrived).

edited on: 2006-05-22 18:41

Jeff Hill (johill-lanl) wrote:

fixed in R3.14.9

Andrew Johnson (anj) wrote:

R3.14.9 Released.
