Timeouts are defined with a #define in microseconds; some hardware has latency that can exceed 999999 microseconds

Bug #463299 reported by Sisyph on 2009-10-29
Affects: libmodbus
Status: Fix Committed
Importance: Wishlist
Assigned to: Stéphane Raimbault
Milestone: (none)

Bug Description

I'm using libmodbus to access a hardware device over TCP. It usually answers quickly (within a few milliseconds), but sometimes it "lags" and answers within seconds (typically 2 seconds).

My first attempt was to change the defined timeout, but the way it is coded, I can't exceed 999999 µs (the code uses only the tv_usec field of the struct timeval passed to select). I have modified the code to set the timeout to 10 seconds in order to measure my hardware's latency.
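
Roughly, the limiting pattern looks like the following (the macro and variable names are simplified for illustration, not the exact ones in the source): the whole timeout lives in tv_usec, and many select() implementations reject tv_usec values of 1000000 or more.

    /* Sketch of the limiting pattern: the whole timeout lives in tv_usec. */
    #define RESPONSE_TIMEOUT_US 500000      /* µs; values > 999999 are out of range */

    struct timeval tv;
    tv.tv_sec = 0;                          /* seconds field is never used */
    tv.tv_usec = RESPONSE_TIMEOUT_US;       /* >= 1000000 here is invalid for select() */
    select(fd + 1, &rfds, NULL, NULL, &tv);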

I can handle this in my application by reconnecting and resending the command, but since the hardware does send a response, I feel it is better to give libmodbus users a way to handle larger timeouts.

One solution might be to split the value into seconds and microseconds, so that timeouts greater than 999999 µs can be set.
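
A minimal sketch of that conversion, assuming the timeout is still given as a single microsecond count:

    /* Split an arbitrary microsecond count across both timeval fields,
     * so timeouts of one second or more stay valid for select(). */
    #include <sys/time.h>

    static void usec_to_timeval(long timeout_usec, struct timeval *tv)
    {
        tv->tv_sec  = timeout_usec / 1000000;
        tv->tv_usec = timeout_usec % 1000000;
    }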

Another (better) way might be to provide functions to set the timeouts.

What do you think?

Sisyph (eric-paul) wrote :

By the way, I've coded functions to set the timeouts. I've attached the patch in case someone is interested,

   my 2 cents.

Stéphane Raimbault (sra) wrote :

Thank you for your patch, I've just added it to my commit queue (yes, I'm a bit slow to answer)!

Changed in libmodbus:
importance: Undecided → Wishlist
assignee: nobody → Stéphane Raimbault (sra)
Changed in libmodbus:
status: New → Fix Committed
Stéphane Raimbault (sra) wrote :

New functions to get/set timeouts have been implemented:
http://github.com/stephane/libmodbus/commit/d8f254779570daf8bba60819fe677af4cba8c87a
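
The exact signatures are in the commit above; as a rough illustration of the idea, a get/set pair working on a struct timeval could look like this (the names and the modbus_param_t layout here are placeholders, not necessarily the committed API):

    /* Illustrative get/set pair for the response timeout.
     * Names and the modbus_param_t layout are placeholders. */
    #include <sys/time.h>

    typedef struct {
        struct timeval response_timeout;
        /* ... other connection state ... */
    } modbus_param_t;

    void modbus_get_timeout(modbus_param_t *mb_param, struct timeval *timeout)
    {
        *timeout = mb_param->response_timeout;
    }

    void modbus_set_timeout(modbus_param_t *mb_param, const struct timeval *timeout)
    {
        mb_param->response_timeout = *timeout;
    }

With such a pair, a caller could raise the timeout to 10 seconds with: struct timeval tv = {10, 0}; modbus_set_timeout(&mb_param, &tv);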
