Timeouts are defined with a #define in microseconds. Some hardware has latency that can exceed 999999 microseconds
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
libmodbus | Fix Committed | Wishlist | Stéphane Raimbault |
Bug Description
I'm using libmodbus to access a device over TCP. It usually answers quite fast (a few milliseconds), but sometimes it "lags" and only answers after a couple of seconds (typically 2 seconds).
My first try was to change the defined timeout, but the way it is coded, I can't exceed 999999 µs (the code uses the tv_usec field of the struct timeval passed to select()). I have modified the code to set the timeout to 10 seconds in order to measure my hardware's latency.
I can handle this in my application by reconnecting and resending the command, but since the hardware does send a response, I feel it would be better to give libmodbus users a way to handle longer timeouts.
One solution might be to split the value into seconds and microseconds, so that timeouts greater than 999999 µs can be set.
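A minimal sketch of that split, using a hypothetical helper (`timeout_to_timeval`) rather than the actual libmodbus code:

```c
#include <stdint.h>
#include <sys/time.h>   /* struct timeval, as used by select() */

/* Hypothetical helper: split a total timeout given in microseconds into
 * the tv_sec/tv_usec pair expected by select(), so values above
 * 999999 µs no longer overflow the tv_usec field. */
static void timeout_to_timeval(uint64_t timeout_us, struct timeval *tv)
{
    tv->tv_sec  = timeout_us / 1000000;  /* whole seconds */
    tv->tv_usec = timeout_us % 1000000;  /* remaining microseconds */
}
```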
Another (better) way might be to provide functions to set the timeouts.
What do you think?
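For the second proposal, here is a rough sketch of what such setter functions could look like; the context structure and function names below are illustrative assumptions, not the signatures of the attached patch or of the committed fix:

```c
#include <sys/time.h>

/* Hypothetical connection context: the response timeout is stored as a
 * struct timeval and waited on in select(), so any representable
 * duration is accepted. */
typedef struct {
    struct timeval response_timeout;
    /* ... other connection state ... */
} my_modbus_ctx_t;

/* Set the response timeout from separate seconds/microseconds values. */
static void my_modbus_set_response_timeout(my_modbus_ctx_t *ctx,
                                           long sec, long usec)
{
    ctx->response_timeout.tv_sec  = sec;
    ctx->response_timeout.tv_usec = usec;
}

/* Read back the current response timeout. */
static void my_modbus_get_response_timeout(const my_modbus_ctx_t *ctx,
                                           long *sec, long *usec)
{
    *sec  = (long) ctx->response_timeout.tv_sec;
    *usec = (long) ctx->response_timeout.tv_usec;
}
```

With such an API, an application could call e.g. `my_modbus_set_response_timeout(&ctx, 10, 0);` to cover the 2-second lag observed here with plenty of margin.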
Changed in libmodbus:
status: New → Fix Committed
By the way, I've coded functions to set the timeouts. I've attached the patch if someone is interested.
My 2 cents.