If I am willing to let some clients give up earlier and set EPICS_CA_CONN_TMO=10, the effective time to shut down (...in that somewhat unusual scenario...) still takes 30s.
I think it's in the last step of tcpSendThread::run - that uses 30s no matter what I have in my environment:
while ( ! this->iiu.recvThread.exitWait ( 30.0 ) ) {
    // it is possible to get stuck here if the user calls
    // ca_context_destroy() when a circuit isnt known to
    // be unresponsive, but is. That situation is probably
    // rare, and the IP kernel might have a timeout for
    // such situations, nevertheless we will attempt to deal
    // with it here after waiting a reasonable amount of time
    // for a clean shutdown to finish.
    epicsGuard < epicsMutex > guard ( this->iiu.mutex );
    this->iiu.initiateAbortShutdown ( guard );
}
I don't know whether those 30s should come from EPICS_CA_CONN_TMO or from a separate environment variable for more granularity, but it seems to me the value should at least default to EPICS_CA_CONN_TMO.
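For illustration, a minimal sketch of what "default the wait to an environment-configurable timeout" could look like. This is not the EPICS implementation — EPICS base would normally go through its envDefs.h configuration machinery rather than raw getenv, and the helper name here (timeoutFromEnv) is made up for this example:

#include <cstdio>
#include <cstdlib>

// Hypothetical helper: read a timeout in seconds from an environment
// variable, falling back to a default when the variable is unset,
// unparsable, or non-positive.
static double timeoutFromEnv ( const char * name, double defaultSeconds )
{
    const char * raw = std::getenv ( name );
    if ( ! raw ) {
        return defaultSeconds;
    }
    char * end = 0;
    double value = std::strtod ( raw, & end );
    if ( end == raw || value <= 0.0 ) {
        return defaultSeconds; // unparsable or nonsensical value
    }
    return value;
}

int main ()
{
    // With EPICS_CA_CONN_TMO unset this yields the 30s fallback;
    // with EPICS_CA_CONN_TMO=10 it yields 10.
    double tmo = timeoutFromEnv ( "EPICS_CA_CONN_TMO", 30.0 );
    std::printf ( "shutdown wait: %g s\n", tmo );
    return 0;
}

With something like this, the exitWait call could become exitWait ( tmo ) and the shutdown wait would track whatever the client already configured for connection timeouts.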
Thank you