Comment 5 for bug 1434434

RaiMan (raimund-hocke) wrote :

@Eugene
yep, that is the main line to follow.

My idea is to generally use keys to tell the recorder what to do:
e.g. ctrl-alt-action, where action could be
c for click
d for double click
r for right click
m for a complex mouse action (configured on the fly)
t for type
p for paste
s for a configurable wheel action (scroll)
w for a wait on this image
v to wait for the image to vanish
….
This would be accompanied by some floating palettes with situation-dependent content, so the user would just move the mouse to the next place and press the relevant key.
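The ctrl-alt-&lt;key&gt; scheme above boils down to a small dispatch table. A minimal sketch in Python; the action names and the `dispatch` helper are my own illustration, only the key mapping comes from the list above:

```python
# Sketch of the proposed ctrl-alt-<key> dispatch for the recorder.
# Action names are placeholders; only the key-to-meaning mapping is
# taken from the proposal above.
RECORDER_ACTIONS = {
    "c": "click",
    "d": "double_click",
    "r": "right_click",
    "m": "mouse_complex",   # complex mouse action, configured on the fly
    "t": "type",
    "p": "paste",
    "s": "scroll",          # configurable wheel action
    "w": "wait",            # wait for the image at this spot
    "v": "vanish",          # wait for the image to vanish
}

def dispatch(key):
    """Map a ctrl-alt-<key> press to a recorder action name, or None."""
    return RECORDER_ACTIONS.get(key.lower())
```

Unknown keys simply fall through (return `None`), so the recorder can ignore them or pop up the palette.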

I am watching a promising Java library that allows intercepting keyboard and mouse events at the system level, so this would not need any hotkey registration.

Going this way, we will not have any problems with visually changing elements, since the recorder can:
- move the mouse away and capture the inactive state
- move the mouse back and capture the active state
if the user signals that it is such a "vivid" (visually changing) element (or this might even be checked automatically, costing only some 10 milliseconds)
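The two-capture flow for a vivid element can be sketched as follows; `move_mouse` and `capture` are hypothetical stand-ins for the real mouse/screen API, the point is the order of operations:

```python
# Sketch: capture both states of a hover-sensitive (vivid) element.
# move_mouse() and capture() are hypothetical stand-ins for the real
# mouse/screen API.

def capture_states(target, move_mouse, capture, offset=(200, 0)):
    """Return (inactive, active) captures of a visually changing element."""
    away = (target[0] + offset[0], target[1] + offset[1])
    move_mouse(away)          # mouse far away: element shows its inactive state
    inactive = capture(target)
    move_mouse(target)        # mouse back on the element: active (hover) state
    active = capture(target)
    return inactive, active
```

At replay time, matching against either capture would then count as "found".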

The recorder's outcome will be some metadata (I guess I will use YAML) that can be edited and saved.
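Such a YAML recording could look roughly like this; all field names here are invented for illustration, nothing is decided yet:

```yaml
# hypothetical recorder output; every field name is an assumption
- action: click
  image: img_0001.png
- action: type
  text: "hello"
- action: vanish
  image: img_0002.png
```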

You might get it as Python, Ruby, JavaScript or Java code (other translators can be added/contributed) to use in your script or, if sufficient, just run as is (this is similar to what Selenium offers).
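A translator in this sense is a simple walk over the recorded steps that emits source lines. A minimal Python sketch; the step schema and the Sikuli-style calls it emits (`click`, `type`, `waitVanish`) are assumptions about what the final format would contain:

```python
# Sketch of a step-to-Python translator, in the spirit of Selenium's
# script export. The step dict schema is an assumption.

def to_python(steps):
    """Translate recorded steps into Python source lines."""
    lines = []
    for step in steps:
        action = step["action"]
        if action == "click":
            lines.append('click("%s")' % step["image"])
        elif action == "type":
            lines.append('type("%s")' % step["text"])
        elif action == "vanish":
            lines.append('waitVanish("%s")' % step["image"])
        else:
            lines.append("# unsupported step: %r" % (step,))
    return "\n".join(lines)
```

A Ruby or Java translator would be the same loop with different templates, which is why contributing new ones should be cheap.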

After a workflow snippet is recorded, one might run it in the recorder to automatically add timing parameters such as maximum waiting times.
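That replay pass could work by measuring how long each image actually took to appear and padding the result. A sketch under assumptions: the `measure` hook and the padding policy (factor plus floor) are mine, not part of the proposal:

```python
# Sketch of deriving maxWait values from a timed replay. The measure()
# callback (seconds until the step's image appeared) and the padding
# policy are assumptions.

def add_timing(steps, measure, factor=3.0, floor=1.0):
    """Attach a maxWait to each recorded step based on observed timing."""
    for step in steps:
        observed = measure(step)                 # seconds until found
        step["maxWait"] = max(floor, observed * factor)
    return steps
```

The padded values would then be written back into the YAML metadata, where the user can still edit them by hand.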