If you write your own DirectShow-based audio renderer and want it to work inside WM Player 11, don’t derive it from CBaseRenderer. If you do, A/V playback in WM Player may hang. Unfortunately, it doesn’t hang all the time; the hang usually manifests after seeking. So at first glance everything appears to work, but once your tester gets hold of it they will inevitably find the hang. The real bummer is that you don’t see this problem using GraphEdit. If you are like me, you develop your custom filters against GraphEdit rather than WM Player, which delays the visibility of this issue.
Interestingly, this issue does not occur in an audio-only graph. In an audio-only graph, a CBaseRenderer-based audio renderer works fine. The issue only occurs when audio and video are in the same graph. It also only occurs with certain source filters, such as the ASF source filter, so not all media file types suffer from it.
After spending about 8 hours in WinDbg digging through the WM Player 11 source code, I was able to confirm that if you derive your custom audio renderer filter from CBaseRenderer, there is a strange interaction between this base class and WM Player. The issue turns out to be an exotic deadlock between WM Player and the scheduling components in CBaseRenderer. Timing is a critical factor in whether the deadlock occurs, which further obscures the problem. It only happens during the transition from the “paused” state to the “run” state in the DShow graph; once the graph is running, the deadlock won’t occur.
The deadlock involves a critical section that WMP locks on the graph’s streaming thread when it sends the first frame of audio to your renderer. On that thread, the render engine in CBaseRenderer responds to this initial frame of data by going into a wait state, waiting for the graph to transition into the “run” state. That transition, however, happens on the UI thread. Remember the critical section WMP took on the streaming thread? WMP now waits on that same critical section from the UI thread. The streaming thread is waiting for the “run” transition, and the UI thread is waiting for the critical section to be released: a classic deadlock.
This doesn’t appear to be a bug in WMP, as you might expect. WMP’s use of critical sections is probably legitimate: in general, you must make sure the streaming thread isn’t processing data before performing a state transition. Unfortunately, the way CBaseRenderer is written, it waits indefinitely for the state transition to occur. The transition never happens, and so we hang.
The best way to resolve this issue is to not derive your filter from CBaseRenderer. The name of the CBaseRenderer class is very deceptive. I expected it to be generic, usable with any type of renderer I might want to build, right? Wrong. Some of the people who originally wrote the base classes confirmed to me that CBaseRenderer is very specific to rendering video. As you know, the video renderer is responsible for blocking and unblocking the streaming thread: each sample is scheduled for playback, and the streaming thread is blocked until the next frame is due to be displayed. While we wait for the next frame’s presentation time, the blocked streaming thread stops the flow of data through the graph. How about audio?
Do we need to block and unblock the streaming thread for each frame of audio data? At first glance I would certainly expect the answer to be a resounding “yes”: schedule audio samples the same way as video samples. But does that really give us the best audio and video synchronization? It is actually better to let a single filter in the graph do the sample scheduling and drive the streaming thread, and this is exactly what WMP expects. WMP expects the video renderer to drive the streaming thread, while the audio renderer is just along for the ride, rendering samples as it receives them.
So there is a simple one-line fix to the CBaseRenderer code. I don’t recommend it, because there may be other issues with using CBaseRenderer as an audio renderer that we haven’t found yet. If you want to do this right, do it the way the Microsoft DirectSound audio renderer does and derive from CBaseFilter. That said, you can actually fix the issue by removing the wait that occurs after the first sample is delivered. I wouldn’t recommend changing the actual CBaseRenderer code; rather, override the “Receive” call in your derived class, copy the code from the base class into your overridden function, and comment out the call to “WaitForRenderTime”. That’s it. Everything magically starts working after this.