More WMP Play, Part 2

In Part 1, I described some considerations for implementing a phase-space visualizer. Now I’ll dig deeper into the theory and implement the visualizer.
The biggest question is how to produce interesting patterns using an input signal with a spread spectrum. By “spread spectrum,” I mean a signal that comprises many frequencies, which is typical of music. By “interesting” I mean patterns that are aesthetically pleasing, which may imply symmetry and some clear correlation with the music. It’s useful to look at two extremes: pure tones and colored noise.
A “pure tone” consists of a single sine wave. Among orchestral and band instruments, the flute and chimes come close to producing pure tones. The spectrum of a pure tone is a single impulse with no “tails” on either side, which means the spectrum is discrete: all of the energy of such a sound is carried by that one frequency. Resonant cavities often emit pure tones (think of blowing across the mouth of a glass bottle). Waves emitted by resonant cavities have harmonics at integer multiples of the fundamental frequency, so they emit sound that is “clean” to the human ear.
A “colored noise,” by contrast, has a spread spectrum, which means that many sine waves are present at once, beating against each other. Snare drums and cymbals produce spread spectra. The energy is distributed among many frequencies, and the spectrum is continuous. Waves on membranes and disks are described by Bessel functions, whose overtones are not integer multiples of the fundamental, so they emit sound that is “noisy” to the human ear.
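To make the two extremes concrete, here is a small sketch (in Python for illustration; the visualizer itself is C#) that compares how spectral energy concentrates for a pure tone versus broadband noise, using an FFT. The 440 Hz frequency and one-second window are arbitrary choices for the demonstration:

```python
import numpy as np

fs = 44100                  # sample rate (Hz)
n = 44100                   # one second of samples (440 Hz fits exactly)
t = np.arange(n) / fs

tone = np.sin(2 * np.pi * 440.0 * t)    # pure tone: a single sine wave
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)          # broadband noise

def spectral_peak_fraction(x):
    """Fraction of total spectral energy in the strongest frequency bin."""
    mag2 = np.abs(np.fft.rfft(x)) ** 2
    return mag2.max() / mag2.sum()

print(spectral_peak_fraction(tone))     # close to 1: energy in one bin
print(spectral_peak_fraction(noise))    # near 0: energy spread across bins
```

For the tone, essentially all the energy lands in one FFT bin (a single impulse); for the noise, the energy is smeared across thousands of bins.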
For the purposes of the visualizer, more pleasing patterns result from a combination of pure tones. Instead of using raw music as the input signal, the visualizer synthesizes a signal from several discrete frequencies. This synthesized signal is the input to the phase-space visualizer.
Synthesizing the signal isn’t as hard as it sounds. The spectrum is really just a collection of sine waves of various frequencies and amplitudes. Windows Media Player provides the spectrum as an array. The visualizer picks a frequency (actually, a narrow band of frequencies) from this array and creates a corresponding sine wave. The amplitude of the sine wave is determined by the value of the array at the corresponding index.
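The synthesis step can be sketched as follows (Python for illustration; the `synthesize` function, band list, and 0–255 normalization are my assumptions about the shape of the data, not WMP API calls). Each selected band contributes one sine wave whose amplitude comes from the spectrum array:

```python
import math

SAMPLE_RATE = 44100.0

def synthesize(spectrum, bands, num_samples, sample_rate=SAMPLE_RATE):
    """Sum one sine wave per selected band, scaled by the spectrum level.

    spectrum -- list of 1024 byte levels (0-255), as WMP reports them
    bands    -- list of (bin_index, frequency_hz) pairs plucked from the array
    """
    samples = []
    for i in range(num_samples):
        t = i / sample_rate
        total = 0.0
        for bin_index, freq in bands:
            amplitude = spectrum[bin_index] / 255.0   # normalize byte level
            total += amplitude * math.sin(2 * math.pi * freq * t)
        samples.append(total)
    return samples

# Example: two bands, one of them silent.
spectrum = [0] * 1024
spectrum[20] = 255   # strong level in region 20
wave = synthesize(spectrum, [(20, 450.0), (500, 10000.0)], 64)
```

Bands whose spectrum level is zero contribute nothing, so quiet passages in the music automatically thin out the synthesized waveform.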

Here’s what the WMP SDK documentation says about the spectrum provided by WMP:

Provides a stereo snapshot of the frequency spectrum of the audio data at a time specified by the Plug-in Manager. It can be used for frequency spectrum effects such as real-time analyzers. The frequency value of the first cell is 20 Hz, and the frequency value of the last cell is 22050 Hz.
The frequency array is a two-dimensional array. The first dimension of each array corresponds to the stereo audio channel (left or right), and the second corresponds to the frequency levels (in bytes) of the snapshot, where the audio spectrum is divided up into 1024 regions.

This means that each element of the array covers a band roughly 21.5 Hz wide:

( 22050 Hz – 20 Hz ) / ( 1024 regions ) ≈ 21.51 Hz / region.

This isn’t terribly high fidelity, but it’s good enough for our purposes.
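The arithmetic above also gives a simple way to map a frequency of interest to its array index (a Python sketch; the helper name and the assumption of linear bin spacing are mine):

```python
LOW_HZ, HIGH_HZ, REGIONS = 20.0, 22050.0, 1024

hz_per_region = (HIGH_HZ - LOW_HZ) / REGIONS   # width of one array element

def bin_for_frequency(freq_hz):
    """Index of the spectrum element covering freq_hz, assuming linear spacing."""
    return int((freq_hz - LOW_HZ) / hz_per_region)

print(round(hz_per_region, 2))    # 21.51
print(bin_for_frequency(440.0))   # 19 -- concert A lands in region 19
```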

To synthesize the waveform that the visualizer displays, I pluck out about 20 regions that span the whole spectrum. For each region, I create a sine wave that’s scaled by the amplitude reported in the TimedLevel array. The final signal is the sum of these individual sine waves. For each frame of the animation, I generate around 1,000 samples; with about 20 sine terms per sample, that’s roughly 20,000 sine evaluations per frame.

for( int j = 0; j < base._frequencies.Count; j++ )
{
   FrequencyAmplitude fa = base._frequencies[j];
   amplitude = fa.Amplitude;

   if( amplitude != 0 )
   {
      frequency = fa.Frequency;

      // Scale the level WMP reports for this band by the user-selected amplitude.
      double f = (double)levels.GetFrequency( AudioChannel.Left, frequency );
      f *= amplitude;

      // Phase angle of this frequency component at sample index i.
      double angle = this._delta * ( 2d * Math.PI ) * (double)frequency * (double)i;

      totalDisplacement += f * Math.Sin( angle );
   }
}


This signal is fed into the phase-space algorithm described in Part 1. To my eye, the result is somewhat more compelling than the irregular phase-space portrait from Part 1.
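For readers who skipped Part 1, the core of a phase-space (delay-embedding) plot is simply pairing each sample with the sample a fixed delay later. A minimal sketch (Python for illustration; the function name and the delay value are my choices):

```python
import math

def phase_space_points(samples, delay):
    """Pair each sample with the sample `delay` steps later: (x, y) = (s[i], s[i+delay])."""
    return [(samples[i], samples[i + delay]) for i in range(len(samples) - delay)]

# A pure sine traced against a delayed copy of itself draws an ellipse.
n, freq, rate = 1000, 440.0, 44100.0
signal = [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
points = phase_space_points(signal, delay=25)   # ~quarter period of 440 Hz
```

With the delay near a quarter period, a single sine traces a near-circle; summing several sines, as the visualizer does, braids several such loops together, which is where the interesting patterns come from.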

Phase-space visualization of synthesized waveform

You can see I’ve had some fun with WPF controls and layout. The top two controls are Slider controls that are bound to the parent UserControl’s Amplitude and Phase properties. Amplitude determines the overall gain of the synthesized waveform, and Phase determines the phase-space delay.

The “equalizer” control on the left side is a ListBox control that is bound to the parent UserControl’s Frequencies property, which exposes a collection of the frequencies and corresponding amplitudes used to synthesize the waveform. The ItemTemplate attribute on the ListBox is assigned a DataTemplate that specifies a Slider control for each frequency in the Frequencies collection. Here’s the XAML:

<DataTemplate x:Key="SliderItemTemplate">
   <Grid>
      <Grid.ColumnDefinitions>
         <ColumnDefinition Width="40"/>
         <ColumnDefinition/>
      </Grid.ColumnDefinitions>
      <TextBlock Grid.Column="0" Text="{Binding Path=Frequency}" />
      <Slider Grid.Column="1" Orientation="Horizontal" Minimum="0" Maximum="5"
              Value="{Binding Path=Amplitude, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}">
         <Slider.LayoutTransform>
            <ScaleTransform ScaleX="0.5" ScaleY="0.5" />
         </Slider.LayoutTransform>
      </Slider>
   </Grid>
</DataTemplate>





The template is consumed by the following layout, which hosts the sliders, the equalizer ListBox, and the drawing Canvas:

<DockPanel Name="panel">
   <DockPanel Name="panel2" DockPanel.Dock="Left">
      <TextBlock Text="Amplitude" Foreground="White" DockPanel.Dock="Top" />
      <Slider Name="amplitudeSlider" DockPanel.Dock="Top" Minimum="0" Maximum="10"
              Value="{Binding Path=Amplitude, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
      <TextBlock Text="Phase" Foreground="White" DockPanel.Dock="Top" />
      <Slider Name="phaseSlider" DockPanel.Dock="Top" Minimum="10" Maximum="100"
              Value="{Binding Path=Phase, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
      <TextBlock Text="Frequencies" Foreground="White" DockPanel.Dock="Top" />
      <ListBox Name="frequenciesListBox" DockPanel.Dock="Left" SelectedIndex="-1"
               ItemsSource="{Binding Path=Frequencies}"
               ItemTemplate="{StaticResource SliderItemTemplate}"
               IsSynchronizedWithCurrentItem="True"/>
   </DockPanel>
   <Canvas DockPanel.Dock="Left" Name="signalCanvas" Width="1280" Height="1024"/>
</DockPanel>



This equalizer control determines the mix of sine waves, which leads to a wide variety of possible patterns. For example, the following screen shot shows a pattern generated from “Genetic World” by Telepopmusik. Low and high frequencies are mixed to produce this pattern.

Phase-space visualization of “Genetic World” with low and high frequencies dominating

The following screen shot shows a pattern generated from “Genetic World,” in which high frequencies dominate.

Phase-space visualization of “Genetic World” with high frequencies dominating

Needless to say, it’s more interesting to watch the patterns when they’re synchronized with the music, but these screen shots may give you an idea of what’s possible.

All of these values can be animated to produce interesting time-varying effects. I’ll discuss these possibilities, as well as performance considerations, in Part 3.





