CA subscription element count is fixed - eliminates compressed video option

Reported by Jeff Hill on 2005-11-17
Affects: EPICS Base
Importance: Wishlist
Assigned to: Unassigned

Bug Description

Subject: RE: Real time video with epics

> To overcome this problem we thought about using compression algorithms,
> but the CA protocol does not allow waveform data blocks to be resized
> dynamically (at run time). This implies that images transferred from the
> EPICS server to the client application have a fixed size, which in turn
> limits the use of image compression algorithms; I hope this limitation
> will disappear with V4.

Actually, strictly speaking, this isn't a protocol limitation. The maximum number of elements is fixed at connect time and further restricted by the subscription, but the current number of elements is passed with every subscription update, both through the protocol and through the subscription update interface.

Looking in the source code, I see that the legacy db_get_field() function always returns the requested number of elements, zero-filling the remainder if the database holds fewer. In contrast, the newer interface, dbGetField(), has a read/write parameter that specifies the requested number of elements on input and returns the actual number on output.

So the issue appears to be fixable without protocol or interface changes, although we can anticipate that some of the client-side tools might not be robust when the element count supplied with a subscription update is less than the maximum specified when the subscription was created.
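Jeff's point, that each update already carries its current element count, can be sketched with a minimal self-contained model. The struct and names below are illustrative stand-ins, not the real CA event_handler_args; they just show how a robust client would honor the per-update count rather than assume the connect-time maximum:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a subscription-update callback argument:
 * every update carries the *current* element count with the data. */
typedef struct {
    size_t count;       /* elements in this update (<= MAX_COUNT) */
    const double *dbr;  /* payload */
} update_args;

enum { MAX_COUNT = 8 }; /* fixed at connect time */

/* A robust client copies only args->count elements and zero-fills the
 * rest, mirroring what the legacy db_get_field() did on the server. */
static size_t on_update(const update_args *args, double *dest)
{
    size_t n = args->count < MAX_COUNT ? args->count : MAX_COUNT;
    memcpy(dest, args->dbr, n * sizeof *dest);
    memset(dest + n, 0, (MAX_COUNT - n) * sizeof *dest);
    return n;
}
```

A client written this way keeps working whether the server sends the full buffer or a shorter, dynamically sized one.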

Jeff

-----Original Message-----
From: Aladwan Ahed
Sent: Thursday, November 17, 2005 4:04 AM
To: Hunt Steven
Cc: 'Kay-Uwe Kasemir'; Emmanuel Mayssat; Tech-talk
Subject: RE: Real time video with epics

Hi All,

On Wednesday, November 16, 2005 10:36 PM, Steven Hunt wrote:

> The original firewire support for Epics I wrote (a long time ago), used
> periodic record scanning. This has the nasty effect that you have to
> scan at twice the frame rate to be sure you miss nothing.

The latest version of the software uses a non-blocking poll mechanism: when we read from the video1394 DMA ring buffer, the caller does not wait if no frame is ready. Moreover, we set a flag (drop_frames) that makes the caller discard all frames in the buffer and take the most recently captured one. This guarantees that each call returns the latest captured frame.

> but you can transfer the parameters
> (beam size and position for instance), and only send the image at a
> lower rate.

That is what we do too, but we also do more processing, which explains the high CPU load on the video server PC (2.8 GHz, 1 GB RAM, typical load 80%): an online centroid-finding algorithm, background subtraction, averaging, horizontal and vertical distribution/profile calculation, and region-of-interest selection, in addition to other calculations such as the maxima and the standard deviation. All of these values are usually available at 10 Hz. Recently the scientists have asked for a 2D Gaussian fit, and the list will continue to grow. Steve, I am really wondering whether you have a dedicated IOC to run your video server on; can you give more information about the performance?
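The centroid finding with background subtraction mentioned above is, at its core, a first-moment computation. A minimal sketch (not the authors' code; the flat background level is an assumption for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Intensity-weighted centroid of a w x h image after subtracting a
 * flat background level; negative residuals are clipped to zero. */
static void centroid(const double *img, size_t w, size_t h,
                     double bg, double *cx, double *cy)
{
    double sum = 0.0, sx = 0.0, sy = 0.0;
    for (size_t y = 0; y < h; ++y)
        for (size_t x = 0; x < w; ++x) {
            double v = img[y * w + x] - bg;   /* background subtraction */
            if (v < 0.0) v = 0.0;
            sum += v;
            sx  += v * (double)x;
            sy  += v * (double)y;
        }
    *cx = sum > 0.0 ? sx / sum : 0.0;
    *cy = sum > 0.0 ? sy / sum : 0.0;
}
```

Doing this once per 1024x768 frame at 10 Hz is a few tens of millions of multiply-adds per second, which is consistent with the reported CPU load once the other per-frame calculations are added.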

The whole setup we are using costs around USD 1400 (Flea point grey camera, SUSI pundit PC). As Linux RH7.3 is not a real time OS, if RT becomes a requirement, we might consider Linux RT.

>> On Wednesday, November 16, 2005 9:01 PM, Kay-Uwe Kasemir wrote:

>> But under vxWorks, especially after increasing the vxWorks 'tick'
>> clock rate, one can usually simply change the menuScan.dbd file
>> and add e.g. ".05 second" and voila:
>> "epics can ... process .. at a maximum rate of 20Hz".

In our setup the Linux system tick is 100 Hz; we modified menuScan.dbd with different values, such as 15 Hz, 30 Hz, and 50 Hz. I managed to process 15 frames/s with our camera (1024x768); when I select a region of interest, I process all 30 frames the camera is able to capture, which is similar to choosing a camera with a lower resolution.
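The change Kasemir describes amounts to adding a choice to menuScan.dbd. A sketch of what such an entry might look like (choice identifiers vary between Base versions, so treat the names as illustrative):

```
menu(menuScan) {
    choice(menuScanPassive,    "Passive")
    choice(menuScanEvent,      "Event")
    choice(menuScanI_O_Intr,   "I/O Intr")
    # ... existing periodic choices ...
    choice(menuScan_1_second,  ".1 second")
    choice(menuScan_05_second, ".05 second")  # added: 20 Hz periodic scan
}
```

The periodic rate can never be finer than the OS tick, which is why the 100 Hz Linux tick above caps the usable scan choices.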

>> Just like ADC driver/device support can be written to process
>> records on "I/O Intr" whenever the ADC receives a trigger,
>> you can write your frame grabber support to trigger record
>> processing whenever the frame grabber gets a new image.
>> The IOC application developer guide has details on the
>> "I/O Intr" mechanism. The EPICS "Event" mechanism might also
>> work.

We now trigger the camera externally to synchronize it with the laser source, so implementing the "I/O Intr" method of grabbing the frames is a better choice.

>> The next problem:
>> When records process at a high rate _and_ there are
>> channel access clients which subscribed to updates from those
>> records, every time the records process, data is sent to those
>> clients (ignoring some ADEL/MDEL details).
>> In the case of the SNS LLRF, those records include waveforms.
>> All is OK as long as only one set of EDM screens is displaying
>> those waveforms.

To overcome this problem we thought about using compression algorithms, but the CA protocol does not allow waveform data blocks to be resized dynamically (at run time). This implies that images transferred from the EPICS server to the client application have a fixed size, which in turn limits the use of image compression algorithms; I hope this limitation will disappear with V4.
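To make the compression idea concrete: once a server can report fewer elements than the waveform's maximum, each frame can be compressed into the worst-case-sized buffer and only the used portion sent. The run-length encoder below is a toy illustration of that pattern, not a suggested image codec:

```c
#include <assert.h>
#include <stddef.h>

/* Toy run-length encoder: writes (value, run) byte pairs into out[],
 * which models a waveform sized for the worst case. Returns the number
 * of bytes actually used -- the "current element count" a dynamic-array
 * server could report instead of always sending out_max bytes. */
static size_t rle_encode(const unsigned char *img, size_t n,
                         unsigned char *out, size_t out_max)
{
    size_t used = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && img[i + run] == img[i] && run < 255)
            ++run;
        if (used + 2 > out_max)
            return 0;                /* incompressible: caller sends raw */
        out[used++] = img[i];
        out[used++] = (unsigned char)run;
        i += run;
    }
    return used;
}
```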

Ahed

Original Mantis Bug: mantis-227
    http://www.aps.anl.gov/epics/mantis/view_bug_page.php?f_id=227

Tags: ca
Andrew Johnson (anj) wrote:

Fixed with the dynamic-array development work.

Changed in epics-base:
status: New → Fix Committed
Andrew Johnson (anj) on 2010-11-24
Changed in epics-base:
status: Fix Committed → Fix Released