Alex, we don't yet have the infrastructure you are talking about here.
Michael has written a proposal for how to store tags on image regions, but it is not yet implemented. We also don't have any UI for this at all.
We currently do not use a canvas like QGraphicsView; there is related work underway to do that in the image editor.
We also don't use QImage, but a custom image container...
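If the detection code expects an OpenCV matrix, a thin bridge could wrap the container's raw buffer. A minimal sketch, assuming an 8-bit BGRA buffer; the bits/width/height accessors are placeholders, not the container's actual API:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdint>

// Wrap an existing 8-bit BGRA buffer without copying, then convert to
// the single-channel grayscale image a Haar detector expects.
// (bits/width/height are assumed accessors, not digiKam's real API.)
cv::Mat wrapAsGray(std::uint8_t* bits, int width, int height)
{
    cv::Mat bgra(height, width, CV_8UC4, bits); // no deep copy
    cv::Mat gray;
    cv::cvtColor(bgra, gray, cv::COLOR_BGRA2GRAY);
    return gray;
}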
What part exactly do you intend to test? Is the workflow fully implemented?
> digikam library -> use haarfeatures to detect faces on image -> use face
> recognition on face identified by classifier -> output to digikam to confirm
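For the detection step of that workflow, here is a minimal, self-contained sketch using OpenCV's cv::CascadeClassifier with one of its stock Haar cascade files. The file paths are placeholders, and in digiKam the image would come from our own container rather than cv::imread:

#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::CascadeClassifier cascade;
    if (!cascade.load("haarcascade_frontalface_alt.xml"))
        return 1;

    cv::Mat image = cv::imread("photo.jpg");
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Each detected face comes back as a bounding rectangle; these
    // would become the "autodetected boxes" shown in the UI.
    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));

    for (const cv::Rect& r : faces)
        std::cout << r.x << "," << r.y << " "
                  << r.width << "x" << r.height << "\n";
    return 0;
}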
I imagine there must be a widget to show autodetected boxes and to allow adding boxes manually, and then to associate a box with a tag (a tag, a person, and a face identifier) -- see the sketch below.
Should this, to start with, be done in a separate window? It would then be easier to play with the UI.
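To make that concrete, a rough sketch of such a standalone tagging window built on QGraphicsView. Every class and member name here is hypothetical; a real widget would use rubber-band selection and a tag/person chooser instead of the right-click shortcut:

#include <QApplication>
#include <QGraphicsView>
#include <QGraphicsScene>
#include <QGraphicsRectItem>
#include <QMouseEvent>
#include <QPixmap>
#include <QPen>

class FaceBoxView : public QGraphicsView
{
public:
    explicit FaceBoxView(const QPixmap& photo)
        : QGraphicsView(new QGraphicsScene)
    {
        scene()->addPixmap(photo);
    }

    // Show an autodetected face as a movable, selectable box.
    void addBox(const QRectF& rect)
    {
        QGraphicsRectItem* item = scene()->addRect(rect, QPen(Qt::red, 2));
        item->setFlags(QGraphicsItem::ItemIsMovable
                       | QGraphicsItem::ItemIsSelectable);
    }

protected:
    // Let the user add a box manually; a real widget would then open
    // a dialog to pick the tag/person for the new region.
    void mousePressEvent(QMouseEvent* event) override
    {
        if (event->button() == Qt::RightButton)
            addBox(QRectF(mapToScene(event->pos()), QSizeF(80, 80)));
        else
            QGraphicsView::mousePressEvent(event);
    }
};

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    FaceBoxView view(QPixmap("photo.jpg"));
    view.addBox(QRectF(40, 60, 120, 120)); // e.g. a detector result
    view.show();
    return app.exec();
}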
Where and how do you store the results of learning?