Video conferencing

Bug #141379 reported by sander on 2007-09-20
This bug affects 1 person
Affects: Coccinella
Importance: Wishlist
Assigned to: buzzdee

Bug Description

It would be nice to have support for video conferencing.

XEPs:
http://www.xmpp.org/extensions/xep-0180.html

background information
http://en.wikipedia.org/wiki/Videoconferencing

FAQ entry that needs an update when this bug is fixed:
http://thecoccinella.org/faq/voip

See also Bug #141350 (Voice conference with multiple users)

Webcam driver for Mac OS X (also used by aMSN):
http://webcam-osx.sourceforge.net/

sander (s-devrieze) on 2007-09-20
description: updated
description: updated
sander (s-devrieze) on 2007-09-21
Changed in coccinella:
assignee: nobody → matsben
importance: Undecided → Wishlist
Mats (matsben) wrote :

The iaxclient 2.0 trunk has a video solution based on Theora, but I don't know how stable that is.
In any case, this will likely be the base for the first attempt, and not an RTP-based one
as the XEP describes. This is the same issue as with GTalk and other VoIP solutions,
which are also RTP-based.

To put this in perspective: not even Skype made their own video solution, but had to buy it
for an unknown amount (likely around 100 M euro or so), and Google bought a
Swedish company for a large amount of money. That says something about the complexity of the task.

However, I'm the first to admit that this would be an exciting thing to do.

antoniofcano (antoniofcano) wrote :

I'd like to try with the new iaxclient version. The RTP solution is more complicated due to the large number of libraries needed (and the wrappers for Tcl).

You are very welcome to try this. This is a very high-risk project, but
therefore also a lot of fun. It is very low on my priority list,
but it would be extremely cool to have it.


antoniofcano (antoniofcano) wrote :

I have now built the new iaxclient 2.0 with video support and upgraded the libiaxclient Tcl extension. I need to run some tests and get a clearer idea of the video call logic in order to update the IAXPhone component.

Best wishes,

Mats (matsben) wrote :

On Thu, 2007-10-25 at 16:14 +0100, Mats Bengtsson wrote:
> Well, I don't have many ideas either ;-)
> I don't know the code.
> Start with SDL and try to make that work; then I could try to make a video widget.

Perfect for me; first make it work. Have you tried to test it on your
computer? I have a Linux box and a Mac; the first works, but I'm not able to
build on the second one. My girlfriend has a Windows box, but it is a little
old and she doesn't let me install devel tools.

> But the frames come uncompressed in YUV format from some callback on
> another thread, like the State callbacks?
> Then they have to be delivered to the main UI thread like we do with the
> other callbacks, and we must keep track of memory allocation. It will
> likely take some effort to write efficient code, since it takes a lot of
> CPU otherwise. Perhaps it is worth looking inside SDL to see how they do
> it? Then just extract what we need.
>

Ok, like the other callbacks (netstat, level, call_state), the new iaxclient
offers a new one called video. This callback gets an iaxc_ev_video struct
which has these properties:
        http://iaxclient.sourceforge.net/doc/html/structiaxc__ev__video.html

One of these properties is a char* data, which holds the video image.

The event function in the Tcl extension:

static void EventVideo(struct iaxc_ev_video video)
{
    MUTEXLOCK(&notifyRecordMutex);
    if (sNotifyRecord[kNotifyCmdVideo]) {
        char *cmd;
        char buf[32];
        int len;
        Tcl_DString ds;

        if (!video.encoded) {
            if (video_mode) {
                display_video(video, video.source == IAXC_SOURCE_REMOTE);
            }

            Tcl_DStringInit(&ds);
            cmd = Tcl_GetStringFromObj(sNotifyRecord[kNotifyCmdVideo], &len);
            Tcl_DStringAppend(&ds, cmd, len);

            /* Remote or local frame */
            sprintf(buf, "%d", video.callNo);
            Tcl_DStringAppendElement(&ds, buf);

            sprintf(buf, "%d", video.ts);
            Tcl_DStringAppendElement(&ds, buf);

            Tcl_DStringAppendElement(&ds, GetMapIntString(mapFormat, video.format));

            sprintf(buf, "%d", video.width);
            Tcl_DStringAppendElement(&ds, buf);

            sprintf(buf, "%d", video.height);
            Tcl_DStringAppendElement(&ds, buf);

            sprintf(buf, "%d", video.encoded);
            Tcl_DStringAppendElement(&ds, buf);

            sprintf(buf, "%d", video.source);
            Tcl_DStringAppendElement(&ds, buf);

            sprintf(buf, "%d", video.size);
            Tcl_DStringAppendElement(&ds, buf);

            XThread_EvalInThread(sMainThreadID, Tcl_DStringValue(&ds), 0);
            Tcl_DStringFree(&ds);
        } else {
            /* Tcl_SetObjResult(interp, Tcl_NewStringObj(
                   "We cannot handle encoded video in callbacks yet\n", -1)); */
        }
    }
    MUTEXUNLOCK(&notifyRecordMutex);
}

You'll see at the beginning that I call a display_video function; that is
where the SDL functions are executed to show the webcam image. If
we've got a Tk w...


Changed in coccinella:
status: New → In Progress
Mats (matsben) wrote :

What I have to do is therefore to create a dummy widget with a callback so it can
redraw itself from some low-level raw image format, like RGB or YUV.
I guess I can start from http://tktoolkit.cvs.sourceforge.net/tktoolkit/tk/generic/tkSquare.c?revision=1.7.2.2&view=markup

Mats (matsben) wrote :

On 10/25/07, Antonio F. Cano <email address hidden> wrote:
> Hello,
>
> A first idea of how to include the video conference into the actual IAX
> implementation of Coccinella.
>
> First, we've got presence... How do we tell other users that we've got a
> video webcam running? For this task the library includes
> iaxclient::videocameraworking, which returns 1 or 0. For now it fails
> (a known bug) and always returns 0 :(. That is optional and only for
> information.

The standard way is using caps: XEP-0115 (Entity Capabilities).

But there is also an old XEP (JEP) that you wrote that could get an update for video?

>
> Two, outgoing video call. Simply call iaxclient::videostart before making
> the dial to the other peer. If the other peer doesn't support video we
> don't have to worry, because iaxclient negotiates the whole call before
> connecting.
>
> Three, incoming call. At startup we register a VideoEvent, and we call
> iaxclient::videostart on the first call to this event after answering
> the call.
>
> With hangup, if we have started video then we have to call
> iaxclient::videostop.
>

Something like this. But if we use CAPS we already know whether the user has
video support; we just don't know if they want to use it.

antoniofcano (antoniofcano) wrote :

> The standard way is using caps: XEP-0115 (Entity Capabilities).
>
> But there is also an old XEP (JEP) that you wrote that could get an
> update for video?
>

Better with CAPS, which is an active XEP. The document that I wrote is
valid for both video and audio because it only talks about the way to contact
the other user.

> Something like this. But if we use CAPS we already know if the user has video support
> but we don't know if they want to use it.
>

Alright, what about having two buttons, one for audio calls and the
other for video calls? That way the user selects which kind to
use.

I'm still having problems with the OS X build, but the iaxclient team
is helping me.

Mats (matsben) wrote :

I have just added a skeleton video widget in trunk/contrib/tcl/video.c
which currently doesn't do anything. It is a bit unclear to me just
now how the frames should be drawn; perhaps using a bit-blit
function, which must be very fast. It is also unclear how it shall be
integrated with the other Tcl package code.

sander (s-devrieze) wrote :

2007/10/26, antoniofcano <email address hidden>:
> Allright, what about having two buttons one for audio call and the
> other one for video call. In that way the user select wich kind to
> use.

Isn't it better to have just a checkbox called "Enable video" in the
call dialog?

Mats (matsben) wrote :

It's a bit early to decide on this when we don't have any code running.
Since Antonio is using SDL, which I think is a bit large to include in a distro,
we (I) must come up with something more lightweight. I need to understand it
better though.


Mats (matsben) wrote :

The difficult part with writing a video widget is doing the "bit blitting" to the actual window. In other words, take raw in-memory RGB (or YUV) data and draw it to the screen. Tk uses an abstract data type

    Drawable d;

to describe the destination context. Only on X11 is this something real. On Windows it looks like

     TkWinDrawable *twdPtr = (TkWinDrawable *) d;

where TkWinDrawable is a struct which also contains a bitmap.handle.
On MacOSX it looks like

    CGrafPtr port;
    port = TkMacOSXGetDrawablePort(d);

and then it is possible to draw raw image data using something like:

    PathSetUpCGContext(port, &cgContext);
...
    provider = CGDataProviderCreateWithData(NULL, block.pixelPtr, size, NULL);
    colorspace = CGColorSpaceCreateDeviceRGB();
    cgImage = CGImageCreate(block.width, block.height,
            8, /* bitsPerComponent */
            block.pixelSize*8, /* bitsPerPixel */
            block.pitch, /* bytesPerRow */
            colorspace, /* colorspace */
            alphaInfo, /* alphaInfo */
            provider, NULL,
            1, /* shouldInterpolate */
            kCGRenderingIntentDefault);
...
    CGContextDrawImage(context->c, CGRectMake(0.0, 0.0, width, height), cgImage);

Code from TkPath. I see now that all QuickDraw (CGrafPtr) code needs to go away for 8.5 since Daniel has replaced everything with CoreGraphics.

Bottom line: this will need a great deal of platform-dependent code. The SDL sources contain tons of examples, but the problem is to understand them. I guess we only need to implement this for the special case of RGB32 or YUV formats. That should be easier. It is like drawing a Tk image in TkPath.

antoniofcano (antoniofcano) wrote :

> The difficult part with writing a video widget is to do the "bit
> blitting" to the actual window. [...]

I'm not able to understand this Tk stuff :O. I'll read it calmly this weekend.

> Bottom line: This will need a great deal of platform dependent code. [...]

I won't be able to work on the video stuff again until next week. Sorry.

Mats (matsben) wrote :

No worries. I think this is my job to solve since I have more experience
writing such code. Just go on using the SDL library and I'll see what
I can do about it.


sander (s-devrieze) on 2007-12-29
description: updated
sander (s-devrieze) wrote :

The open video codec Theora 1.0 has been released: http://www.xiph.org/press/2008/theora-release-1.0/

buzzdee (sebastia) wrote :

I bought a uvideo(4)-based USB webcam for my OpenBSD box. I found that, after some patching, it works fine with tkv4lng from here: http://www.ch-werner.de/tkv4lng/
So for Linux/OpenBSD clients, this could potentially be used to get the data from the webcam (:

buzzdee (sebastia) on 2011-02-16
Changed in coccinella:
milestone: none → 0.96.22
assignee: Mats (matsben) → buzzdee (sebastia)
buzzdee (sebastia) wrote :

Here: http://mediatools.cs.ucl.ac.uk/nets/mmedia/

I found vic and rat; both tools are for RTP streams: vic is for video, rat for audio.
I got a proof-of-concept video stream working with vic; audio is not yet tested with rat.
vic and rat are a mixture of C/C++ and Tcl/Tk, and they integrate fairly nicely into Coccinella.
The code I have running is still very ugly and will need a lot of cleanup, and vic needs some patches to make the experience with Coccinella a bit better. Some first patches already made it upstream to make vic work on OpenBSD. Other patches, adding some command line options, have been sent upstream.

https://frostie.cs.ucl.ac.uk/repos/mmedia/common/trunk
https://frostie.cs.ucl.ac.uk/repos/mmedia/rat/trunk
https://frostie.cs.ucl.ac.uk/repos/mmedia/vic/branches/mpeg4

I still need to look into the transport. I looked at ICE-UDP, but for that Coccinella would need a STUN server.
I guess XEP-0177: Jingle Raw UDP Transport Method will have to do the trick for the time being, until stun.tcl is enhanced with STUN server support.
