+ // Media loading flags. See:
+ // http://www.whatwg.org/specs/web-apps/current-work/#video)
+ PRUint16 mNetworkState;
+ PRUint16 mReadyState;

Can we have typedefs for PRUint16 defined for these?

+ PRBool mBegun;
+ PRBool mLoadedFirstFrame;
+ PRBool mLoadedEnoughToPlayThrough;
+ PRBool mAutoplaying;
+ PRBool mPaused;
+ PRBool mSeeking;

Might as well make these PRPackedBool. You might want to document any useful invariants that hold between these booleans.

You don't need to initialize the decoder in two different places.

Document the relationship between mAutoplaying and the autoplay attribute; in particular, clarify how the two differ.

+ double mVolume;
+#if defined(SYDNEY_AUDIO_NO_POSITION)
+ double mPauseTime;
+#else
+ PRInt64 mPauseBytes;
+#endif
+ void* mAudioHandle;
+ int mRate;

Document the units for all of these.

The methods of nsVideoDecoder need more documentation to indicate which thread they are called on. Maybe group the methods that are called on each thread? Similarly, group the data members according to which thread they are used by. Maybe even put the thread name in the names of the methods.

+ nsCOMPtr<nsIRunnable> mInvalidateEvent;
+ nsCOMPtr<nsIRunnable> mDecodeEvent;
+ nsCOMPtr<nsIRunnable> mPresentationEvent;

Use nsRevocableEventPtr here so you don't have to explicitly Revoke() when these are nulled out in Shutdown()?

+ PRBool mFirstFrameLoaded;
+ PRCondVar* mFirstFrameCondVar;
+ PRLock* mFirstFrameLock;

Explain what these protect and how the condition variable is used.

+ sa_stream_destroy((sa_stream_t*)mAudioHandle);

Use static_cast here (and elsewhere).

+ *aTime = ((double)PR_IntervalToMilliseconds(PR_IntervalNow()))/1000.0 - mPauseTime;
+ *aTime = static_cast(((bytes + mPauseBytes) * 1000 / mRate / (sizeof(short) * mChannels))) / 1000.0;

Use constructor-style casts here.

+ nsChannelReader* me = (nsChannelReader*)handle;

Use static_cast here (and elsewhere).

+ Presentation Thread
+ This thread goes through the data decoded by the decode thread
+ and displays it if it is video, or queues it for playing if it
+ is audio. It owns the audio device - all audio operations occur
+ on this thread.

It doesn't really display it, right? We just queue frames to be picked up by the main thread or something. I think all the comments here should go in nsVideoDecoder.h.

+ As a final step of the first frame event is to notify a condition
+ variable that we have decoded the first frame.

This doesn't quite parse.

+// An Event that uses an nsVideoDecoder object. It provides a
+// way to inform the event that the decoder object is no longer
+// valid, so queued events can safely ignore it.
+class nsDecoderEvent : public nsRunnable {

How about adding a RunWithLock method that subclasses have to override, so that the "lock ... run ... unlock" pattern of Run() can be shared in the superclass (rough sketch below)? Specify which thread each event runs on.

+ if (!mRGB) {
+   mRGBWidth = y_width;
+   mRGBHeight = y_height;
+   if (mRGBWidth < MAX_VIDEO_WIDTH && mRGBHeight < MAX_VIDEO_HEIGHT) {
+     mRGB = new (std::nothrow) unsigned char[mRGBWidth * mRGBHeight * 4];
+   }
+ }
+ else if (mRGBWidth != y_width || mRGBHeight != y_height) {
+   mRGBWidth = y_width;
+   mRGBHeight = y_height;
+   if (mRGBWidth < MAX_VIDEO_WIDTH && mRGBHeight < MAX_VIDEO_HEIGHT) {
+     mRGB = new (std::nothrow) unsigned char[mRGBWidth * mRGBHeight * 4];
+   }
+   else {
+     mRGB.forget();
+   }
+ }

This could be simplified to avoid the code duplication.
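For example, one way to fold the two mRGB branches together (this assumes mRGB is an nsAutoArrayPtr, so assigning nsnull frees any previous buffer, which looks like what the forget() call intended):

  // Sketch only, not the patch's code: reallocate whenever there is no
  // buffer yet or the frame size changed, and drop the buffer when the
  // new size is out of range.
  if (!mRGB || mRGBWidth != y_width || mRGBHeight != y_height) {
    mRGBWidth = y_width;
    mRGBHeight = y_height;
    if (mRGBWidth < MAX_VIDEO_WIDTH && mRGBHeight < MAX_VIDEO_HEIGHT) {
      mRGB = new (std::nothrow) unsigned char[mRGBWidth * mRGBHeight * 4];
    }
    else {
      mRGB = nsnull; // frees the old buffer, unlike forget()
    }
  }

And here is the rough shape I have in mind for the RunWithLock suggestion above. The lock and decoder members, and the way Revoke() fits in, are my guesses at how nsDecoderEvent is set up in the patch, so treat this as a sketch rather than a drop-in replacement:

  // Needs nsThreadUtils.h (nsRunnable) and nsAutoLock.h.
  // The superclass owns the "lock ... check ... run ... unlock" sequence;
  // subclasses only supply RunWithLock().
  class nsDecoderEvent : public nsRunnable {
  public:
    nsDecoderEvent(nsVideoDecoder* aDecoder, PRLock* aLock)
      : mDecoder(aDecoder), mLock(aLock) { }

    // Called from Shutdown() so queued events ignore the dead decoder.
    void Revoke() {
      nsAutoLock lock(mLock);
      mDecoder = nsnull;
    }

    NS_IMETHOD Run() {
      nsAutoLock lock(mLock);
      if (mDecoder) {
        RunWithLock();
      }
      return NS_OK;
    }

  protected:
    // Subclass-specific work; always called with mLock held and
    // mDecoder non-null.
    virtual void RunWithLock() = 0;

    nsVideoDecoder* mDecoder;
    PRLock* mLock;
  };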
+ PR_Lock(mLock);
+ if (mElement) {
+   nsCOMPtr<nsIRunnable> metadataLoadedEvent =
+     NS_NEW_RUNNABLE_METHOD(nsVideoDecoder, this, MetadataLoaded);
+
+   if (metadataLoadedEvent) {
+     NS_DispatchToMainThread(metadataLoadedEvent, NS_DISPATCH_NORMAL);
+   }
+ }
+ PR_Unlock(mLock);

Why do you need mLock and the mElement check here?

+ PR_Lock(mLock);
+ if (mElement) {
+   nsCOMPtr<nsIRunnable> frameLoadedEvent =
+     NS_NEW_RUNNABLE_METHOD(nsVideoDecoder, this, FirstFrameLoaded);
+
+   if (frameLoadedEvent) {
+     NS_DispatchToMainThread(frameLoadedEvent, NS_DISPATCH_NORMAL);
+   }
+ }
+ PR_Unlock(mLock);

Ditto.

+void nsVideoDecoder::ElementUnavailable()
+{
+  // Need to ensure that we are not using mElement in the invalidate
+  // thread before setting it to null.
+  PR_Lock(mLock);
+  mElement = nsnull;
+  PR_Unlock(mLock);
+}

The comment doesn't make sense to me. InvalidateFrame is only going to be called on the main thread (right?), and so is this, so the two can't be interleaved.

+ PRInt32 millis = static_cast<PRInt32>((mVideoNextFrameTime-audio_time) * 1000.0);
+ PRInt32 millis = static_cast<PRInt32>(mFramerate);

Use constructor-style casts here.

It seems that the patch currently tries to play all of the audio and video tracks in the file. It should probably play just one audio track and one video track.

You can use nsAutoLock all over the place (rough sketch at the end of this review).

+ if (static_cast(mLastFrameSize.width) != element->GetVideoWidth() ||
+     static_cast(mLastFrameSize.height) != element->GetVideoHeight()) {

Constructor-style casts here too.

+nsVideoFrame::Reflow(nsPresContext* aPresContext,
+ nsHTMLReflowMetrics& aMetrics,
+ const nsHTMLReflowState& aReflowState,
+ nsReflowStatus& aStatus)

Fix the indentation.

+#if 0
+
+  virtual nsSize ComputeSize(nsIRenderingContext *aRenderingContext,
+                             nsSize aCBSize, nscoord aAvailableWidth,
+                             nsSize aMargin, nsSize aBorder, nsSize aPadding,
+                             PRBool aShrinkWrap);
+#endif

Delete this?

Overall I think this is looking pretty good; the architecture is nicely nailed down. It would be good to keep a list of known issues somewhere, probably as a tracking bug.
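One more note on the nsAutoLock comment above: this is roughly what it looks like applied to the metadata-loaded snippet quoted earlier, assuming the mLock/mElement check stays. The wrapper method name is made up for illustration; the point is just that the lock releases itself on every exit path:

  #include "nsAutoLock.h"    // nsAutoLock
  #include "nsCOMPtr.h"      // nsCOMPtr
  #include "nsThreadUtils.h" // NS_NEW_RUNNABLE_METHOD, NS_DispatchToMainThread

  // Hypothetical method name, for illustration only.
  void nsVideoDecoder::NotifyMetadataLoaded()
  {
    nsAutoLock lock(mLock); // unlocked automatically when we leave the scope
    if (!mElement)
      return;

    nsCOMPtr<nsIRunnable> event =
      NS_NEW_RUNNABLE_METHOD(nsVideoDecoder, this, MetadataLoaded);
    if (event) {
      NS_DispatchToMainThread(event, NS_DISPATCH_NORMAL);
    }
  }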