You can’t hear DC

Recently one of my team members found a bug in some old code while doing a code review.  Our application was generating a sine wave to be rendered by the audio hardware.  The sample format isn’t important except to note that it is an unsigned value between 0 and MAX = 2*FS_AMP.  The bug is…
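For context, here is a minimal sketch of how a generator for that format might look — my own construction, not the post's code, assuming FS_AMP is the full-scale amplitude and a hypothetical 48 kHz sample rate.  Since the format is unsigned, the sine is offset by FS_AMP so samples land in 0..MAX; that offset is pure DC, which is inaudible.

```python
import math

FS_AMP = 32767        # assumed full-scale amplitude, so MAX = 2 * FS_AMP
SAMPLE_RATE = 48000   # hypothetical sample rate in Hz

def sine_samples(freq_hz, count):
    """Unsigned sine samples: offset by FS_AMP so values span 0..MAX.

    The constant FS_AMP offset is a DC component -- you can't hear it,
    but it keeps every sample non-negative for the unsigned format.
    """
    return [round(FS_AMP * (1.0 + math.sin(2.0 * math.pi * freq_hz * i / SAMPLE_RATE)))
            for i in range(count)]
```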

More posts eventually!

It’s that time of year, it seems.  I was down with the flu last week, and I’m trying desperately to catch up this week.  I promise I’ll get more posts up soon.  I’m doing some WASAPI playback library stuff right now and I’m just dying to do a couple of articles about the new Vista…

Digital Audio: Aliasing

Sampling a continuous waveform into discrete digital samples results in lost information.  Discrete samples can only tell us what the wave is doing at periodic instants in time, not what’s happening between them.  The sampled continuous wave could be doing anything between samples.  We simply don’t know.  The problem here is that when we want…
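A quick numeric illustration of the consequence (my numbers, not the post’s): a tone above the Nyquist frequency produces exactly the same samples as a tone below it, so after sampling the two are indistinguishable.

```python
import math

fs = 8000  # hypothetical sample rate; Nyquist frequency is 4000 Hz

def sampled(freq_hz, count):
    """Sample a unit-amplitude sine at the rate fs."""
    return [math.sin(2 * math.pi * freq_hz * i / fs) for i in range(count)]

high = sampled(7000, 16)  # above Nyquist
low = sampled(1000, 16)   # its alias
# Sampled at 8 kHz, a 7 kHz tone yields the same samples as an inverted
# 1 kHz tone: sin(2*pi*7i/8) = -sin(2*pi*i/8) for every integer i.
for h, l in zip(high, low):
    assert abs(h + l) < 1e-9
```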

Clipping in popular music

Aside from the distortion artifacts, one of the biggest problems that results from clipping is a loss of dynamic range.  Remember that the dynamic range of a signal is effectively the difference between the maximum output level and the noise floor.  When you clip a waveform, you lower the maximum sample value, which lowers the output level and…
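To make that concrete (hypothetical numbers, assuming a noise floor of one LSB): dynamic range in dB is 20·log10 of the ratio between the maximum level and the noise floor, so halving the maximum level costs about 6 dB.

```python
import math

def dynamic_range_db(max_level, noise_floor):
    """Dynamic range as the dB ratio of maximum output level to noise floor."""
    return 20.0 * math.log10(max_level / noise_floor)

full = dynamic_range_db(32767, 1)        # 16-bit full scale over a 1 LSB floor
halved = dynamic_range_db(32767 / 2, 1)  # same floor, half the peak level
# The difference is 20*log10(2), about 6.02 dB of dynamic range lost.
```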

Louder Sounds Better

Below is an example of the Fletcher-Munson Equal Loudness Curve.  It is one of the most recognized graphics in audio engineering.  The horizontal axis is the frequency of a tone, and the vertical axis is actual sound pressure in dBSPL.  Each point on a curve has about the same subjective “loudness” to the human ear.  The low parts…

Audio Fidelity: Clipping

In theory, an audio signal can take on any amplitude.  There is no mathematical upper limit for how far from zero a sample can go, or how high the magnitude of a continuous wave can go.  In practice, however, a digital signal’s amplitude is limited by its number of bits, and even electrical components can…
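A hard clip is just a clamp at the rails.  Here is a minimal sketch of my own, with the limit normalized to ±1.0:

```python
def hard_clip(samples, limit=1.0):
    """Clamp each sample to [-limit, +limit]; anything beyond is flattened
    onto the rail, which is where clipping's distortion artifacts come from."""
    return [max(-limit, min(limit, s)) for s in samples]

print(hard_clip([0.2, 0.9, 1.4, -1.7]))  # [0.2, 0.9, 1.0, -1.0]
```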

The difference between measuring DR and THD+N

I’ve talked here before about how noise and distortion are very similar concepts with very different causes.  Noise is unwanted artifacts independent of the signal often caused by physical processes outside of a device.  Distortion is unwanted artifacts directly correlated with the signal usually caused by components inside of the device.  When taking measurements of these artifacts, what we aim…

Audio Fidelity: Output Level

Output level is one of the simplest fidelity metrics to understand, but don’t take that to mean it’s not important.  There are several occasions where you want to know the maximum, loudest value that a signal can get.  On the digital side, that’s pretty easy.  A full-scale digital signal is a waveform (usually a sine wave) which…
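As a worked example (my own numbers): the RMS level of a full-scale sine sits a factor of 1/√2 below its peak, which is why a full-scale sine measures about −3.01 dBFS RMS.

```python
import math

N = 48000  # one second at a hypothetical 48 kHz rate
# 997 Hz isn't special beyond fitting a whole number of cycles into N samples.
sine = [math.sin(2 * math.pi * 997 * i / N) for i in range(N)]

rms = math.sqrt(sum(s * s for s in sine) / N)
dbfs = 20 * math.log10(rms)  # about -3.01 dBFS, i.e. 1/sqrt(2) of peak
```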

Always dither before you quantize

Quantization adds noise.  Taking a nice continuous signal and expressing it as distinct integers introduces round-off error, which means you’ve added random fluctuations to the signal, which is the definition of noise.  Remember that noise is inevitable, so we just have to manage it (such as by using enough bits per sample).  The problem is that round-off errors aren’t…
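A sketch of the idea — my own construction, using triangular (TPDF) dither spanning ±1 LSB: adding low-level random noise before rounding turns the signal-correlated round-off error into benign, signal-independent noise.

```python
import random

def quantize(x, step=1.0):
    """Plain rounding: the error is deterministic and tracks the signal."""
    return step * round(x / step)

def dithered_quantize(x, step=1.0, rng=random.random):
    """Add TPDF dither (two uniform sources, +/-1 LSB peak) before rounding."""
    tpdf = (rng() - rng()) * step
    return step * round((x + tpdf) / step)

random.seed(1)
# Undithered, a constant 0.3 always rounds to 0.0: the error is pure bias.
# Dithered, individual samples are noisy, but the average converges on 0.3.
avg = sum(dithered_quantize(0.3) for _ in range(20000)) / 20000
```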

Audio Fidelity: Frequency Response

Not all frequencies are created equal.  And they’re also not generally treated equally by a digital filter.  How unequally they’re treated is one of the defining characteristics of a filter.  Audio engineers have a metric for describing this behavior.  The frequency response of a filter or system tells us the relative output signal given a known…
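For a tiny concrete example of my own: the two-point moving average y[n] = (x[n] + x[n−1])/2 passes DC untouched and cancels the Nyquist frequency completely; evaluating its transfer function gives the relative output at any frequency.

```python
import cmath
import math

fs = 48000  # hypothetical sample rate in Hz

def moving_average_response(freq_hz):
    """Magnitude response of y[n] = (x[n] + x[n-1]) / 2 at one frequency."""
    w = 2 * math.pi * freq_hz / fs          # frequency in radians per sample
    return abs((1 + cmath.exp(-1j * w)) / 2)

# DC passes unchanged; Nyquist (fs/2) is cancelled entirely.
dc_gain = moving_average_response(0)
nyquist_gain = moving_average_response(fs / 2)
```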
