The difference between expected and actual seems to be ~= epicsThreadSleepQuantum() (i.e. 1.0/sysClkRateGet()). This is the granularity of epicsEventWaitWithTimeout(), which is (I think) ultimately what the timer queue is using.
The code in epicsEventWaitWithTimeout() may explain what the -quantum()/2 adjustment was doing.
> if (timeOut <= 0.0) {
> ticks = 0;
> } else if (timeOut >= (double) INT_MAX / rate) {
> ticks = WAIT_FOREVER;
> } else {
> ticks = timeOut * rate; // MD - rate=sysClkRate
> if (ticks <= 0)
> ticks = 1;
> }
Previously, subtracting quantum()/2 could result in timeOut <= 0. Now that is never the case, so the minimum wait is 1 tick.