Timeouts are defined with a #define in microseconds. Some hardware has latency that can exceed 999999 microseconds.

Bug #463299 reported by Sisyph on 2009-10-29
Affects: libmodbus
Status: Fix Committed
Assigned to: Stéphane Raimbault

Bug Description

I'm using libmodbus to access hardware over TCP. It usually answers quickly (within a few milliseconds), but sometimes it lags and takes seconds to answer (typically 2 seconds).

My first try was to change the defined timeout, but as it is coded I can't exceed 999999 µs (the code uses the tv_usec field of the struct timeval passed to select). I modified the code to set the timeout to 10 seconds in order to measure my hardware's latency.

I can handle this in my application by reconnecting and resending the command, but since the hardware does send a response, I feel it would be better to give libmodbus users a way to handle longer timeouts.

One solution might be to store the value as seconds and microseconds, so that timeouts greater than 999999 µs can be set.

Another (better) way might be to provide functions to set the timeouts.

What do you think?

Sisyph (eric-paul) wrote :

By the way, I've coded functions to set timeouts. I've attached the patch if someone is interested.

   my 2 cents.

Stéphane Raimbault (sra) wrote :

Thank you for your patch, just added to my commit queue (yes I'm a bit slow to answer)!

Changed in libmodbus:
importance: Undecided → Wishlist
assignee: nobody → Stéphane Raimbault (sra)
Changed in libmodbus:
status: New → Fix Committed
Stéphane Raimbault (sra) wrote :

New functions to get/set timeouts have been implemented.
