Forgive my digression, but I need to lay some digital signal processing (*DSP*) groundwork for what I want to talk about next. If you're already a DSP guru, then you may want to skip this one. (If you're a DSP guru, what're you reading a blog called "Audio Fool" for anyway?)

As you know, a physical audio signal is just a sound pressure wave that increases and decreases pressure with time. At any instant you measure, there is a level, and that level can move up and down an arbitrarily small amount. This signal is continuous, or *analog*, in both time and level (in as much as quantum physics lets anything be continuous).

The analog nature of sound causes a problem when you want to put the signal into a computer. Computers only understand numbers. So to get a signal into a computer, you have to represent it as a series of numbers. The way to do this is to measure the level of the signal many times per second and keep only the levels at those instants. Once you're done, you end up with a list of snapshots that represents the original signal. This process is called *sampling*. Each of the numbers that you store is a *sample*, and the number of times per second that you collect a sample is called the *sample rate*.
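To make the sampling step concrete, here's a minimal sketch in Python. The "continuous" signal is stood in for by a 440 Hz sine function, and the 48 kHz sample rate is just a common choice for illustration:

```python
import math

# Stand-in for a continuous analog signal: a 440 Hz sine wave.
def signal(t):
    return math.sin(2 * math.pi * 440 * t)

sample_rate = 48000            # samples per second (a common audio rate)
duration = 0.001               # capture one millisecond of the signal
num_samples = int(sample_rate * duration)

# Sampling: measure the level at evenly spaced instants and keep only those.
samples = [signal(n / sample_rate) for n in range(num_samples)]

print(len(samples))  # 48 snapshots represent that millisecond
```

Everything the signal did *between* those 48 instants is thrown away; whether that matters is exactly the question the rest of this series deals with.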

There's another problem with digitizing audio. The continuous analog wave can have any real value, with any number of digits of precision. The precision of a number in a computer is limited by the amount of memory you use, so you have to round off the sample value when you read it. This rounding step is called *quantization*. The precision of a digital sample is usually expressed as a number of *bits per sample*. One bit is simply "on" or "off". Each extra bit doubles the number of levels you can have. Common bits-per-sample values for digital audio are 8 (256 values) and 16 (65536 values), especially handy because they're exactly one and two bytes respectively. More bits per sample gets you better precision, which leads to a more accurate representation of the source signal, which leads to better audio. But how many is enough? Is more always better? That's the topic for the next post.
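A quantizer can be sketched in a few lines. This is just one illustrative mapping (levels spread evenly over a nominal -1.0 to 1.0 range), not any particular converter's scheme; the point is that the rounding error shrinks by half with every added bit:

```python
def quantize(x, bits):
    """Round a level in [-1.0, 1.0) to the nearest of 2**bits discrete steps."""
    levels = 2 ** bits
    # Map [-1, 1) onto integer codes 0 .. levels-1, then map the code back.
    code = int((x + 1.0) / 2.0 * levels)
    code = min(code, levels - 1)          # clamp the top edge of the range
    return code / levels * 2.0 - 1.0

x = 0.123456789
# 16 bits gives 65536 levels, so the error is at most one step (about 3e-5).
print(abs(quantize(x, 16) - x) <= 2 / 65536)   # True
# 8 bits gives only 256 levels, so the error can be 256 times larger.
print(abs(quantize(x, 8) - x) <= 2 / 256)      # True
```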

There are actually two distinct ways of getting a digital sample from an analog signal – one is to average the analog input across the entire sample interval, and one is to just grab a single point. Older "analog" video cameras use the former, and digital cameras use the latter. However, in both cases you have the same problem: because you’re not capturing absolutely everything across the entire spectrum, you cannot reproduce the original signal exactly.

Come to think of it, both of these are just one way, really – the "single point" thing is pretty well guaranteed to /not/ be a single point, so the only difference is the ratio of sampling time to dead time in a sample.

My father constantly complains about the use of digital cameras in sports, in particular racing, because it looks like the car is jumping.

Vorn

This is not nearly the problem in audio that it is in slow-mo video. In a sampled analog signal, all of the frequencies up to half the sample rate are accurately represented. The information you lose by sampling is in frequencies greater than half the sample rate.

If you sample audio at 48kHz, then you have an accurate representation of everything below 24kHz. The information you lose is out-of-band.
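This claim is easy to demonstrate numerically. In the sketch below (my illustration, not from the post), a 30 kHz tone – above the 24 kHz half-rate limit – produces exactly the same 48 kHz samples as an 18 kHz tone with inverted phase, which is the aliasing that makes out-of-band content unrecoverable:

```python
import math

sample_rate = 48000
nyquist = sample_rate / 2      # 24 kHz: the highest frequency represented

# A 30 kHz tone is above the limit; its samples match those of a
# phase-inverted 18 kHz tone (48 kHz - 30 kHz). That's aliasing.
f_high, f_alias = 30000, 18000
for n in range(10):
    t = n / sample_rate
    hi = math.sin(2 * math.pi * f_high * t)
    lo = -math.sin(2 * math.pi * f_alias * t)
    assert abs(hi - lo) < 1e-9

print("30 kHz sampled at 48 kHz is indistinguishable from 18 kHz")
```

In practice this is why analog-to-digital converters put a low-pass filter in front of the sampler: anything above half the sample rate must be removed *before* sampling, or it folds back into the audible band.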
