Three quarters are better than a dollar because they make noise!

-- Lilly, *Lilly's Purple Plastic Purse*

Last time I talked about how to calculate a sine sweep mathematically. There's a closed-form solution which is quite elegant but, alas, useless in a practical computer implementation, because of the very large numbers that end up being fed to the sin(...) function.

A computer implementation cares only about the discrete samples that need to be generated. The integral in the previous post turns out to be counterproductive - we're much more interested in the sequence of *φ*_{i}, and more particularly in their residues mod 2π.
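In Python-ish pseudocode (the scripts below are Matlab; this sketch, with a made-up helper name, just illustrates the idea), the running phase stays bounded if we accumulate only residues:

```python
import math

def phase_residues(frequencies_hz, sample_rate):
    """Accumulate per-sample phase increments, keeping only the residue mod 2*pi.

    frequencies_hz: instantaneous frequency at each sample (any iterable).
    Returns the list of phase residues phi_i, each in [0, 2*pi).
    """
    phase = 0.0
    residues = []
    for f in frequencies_hz:
        residues.append(phase)
        # delta-phi for one sample at instantaneous frequency f
        phase = (phase + 2 * math.pi * f / sample_rate) % (2 * math.pi)
    return residues

# A 1 kHz tone at 48 kHz advances by 2*pi*1000/48000 radians per sample.
residues = phase_residues([1000.0] * 4, 48000)
```

Because each *φ*_{i} is reduced mod 2π before it can grow, the values handed to sin(...) never leave [0, 2π), no matter how long the sweep runs.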

Here are Matlab scripts that implement a sine sweep both ways:

First, the naïve mathematical solution that just calculates the exponential:

```matlab
function signal = sinesweep_nostate( ...
    start, ...
    finish, ...
    seconds, ...
    sample_rate, ...
    amplitude, ...
    initial_phase, ...
    dc ...
)
% sinesweep_nostate returns a single-channel logarithmic sine sweep
%
% DO NOT USE THIS FUNCTION IN PRODUCTION
% THIS IS A PEDAGOGICAL EXERCISE ONLY
% THE SIGNAL THIS PRODUCES GRADUALLY DISTORTS
% INSTEAD USE sinesweep
%
% start: starting frequency in Hz
% finish: ending frequency in Hz
% seconds: duration of sweep in seconds
% sample_rate: samples per second
% amplitude: amplitude of the sweep
% initial_phase: starting phase in radians
% dc: dc offset

% echo these interesting intermediate values to the console
% (no semicolons, so Matlab prints them)
R = (finish / start) ^ (1 / seconds)
B = 2 * pi * start / log(R)
A = initial_phase - B

time = 0 : 1 / sample_rate : seconds;
phase = A + B * R .^ time;
signal = amplitude * sin(phase) + dc;

end % sinesweep_nostate
```

Now, the iterative version that adds up the little Δ*φ*s to calculate *φ*_{i} iteratively:

```matlab
function signal = sinesweep( ...
    start, ...
    finish, ...
    seconds, ...
    sample_rate, ...
    amplitude, ...
    initial_phase, ...
    dc ...
)
% sinesweep returns a single-channel logarithmic sine sweep
%
% start: starting frequency in Hz
% finish: ending frequency in Hz
% seconds: duration of sweep in seconds
% sample_rate: samples per second
% amplitude: amplitude of the sweep
% initial_phase: starting phase in radians
% dc: dc offset

time = 0 : 1 / sample_rate : seconds;

% instantaneous frequency, interpolated geometrically from start to finish
frequency = exp( ...
    log(start) * (1 - time / seconds) + ...
    log(finish) * (time / seconds) ...
);

% accumulate the per-sample phase increments, keeping only the residue mod 2*pi
phase = 0 * time;
phase(1) = initial_phase;
for i = 2:length(phase)
    phase(i) = ...
        mod(phase(i - 1) + 2 * pi * frequency(i) / sample_rate, 2 * pi);
end

signal = amplitude * sin(phase) + dc;

end % sinesweep
```

If anything, one would expect the first to be more accurate, since it is mathematically precise, and the second is only an approximation. But let's see how the signal produced by each of them holds up.

I calculate, and plot, the same sweep both ways, with a slight dc offset to facilitate seeing the different sweeps on the same plot. In particular this is a logarithmic sweep from 20 kHz to 21 kHz, over 30 seconds, using 44100 samples per second, an amplitude of 0.2, an initial phase of π, and a dc of 0.25 for one and 0.26 for the other:

```matlab
plot( ...
    x, sinesweep(20000, 21000, 30, 44100, 0.2, pi, 0.25), 'd', ...
    x, sinesweep_nostate(20000, 21000, 30, 44100, 0.2, pi, 0.26), 's' ...
)
```

(x is just 0:30*44100, matching the 30*44100 + 1 samples the functions return - necessary to allow a single plot statement.)

Note that the approximate method is plotted in diamonds, and will be 0.01 *lower* on the resulting graph than the "exact" method, plotted in squares.

(This takes a while to plot because there are a heckuva lot of points...)

Ah, a nice solid mass of green and blue. I won't show it here.

OK, let's zoom in to a very early chunk of the signal - I expect these to match up closely, with only the dc offset separating the points:

Yup. Looks pretty good - give or take a pixel (quantization error in the display.)

Now let's see what happens to the signal towards the end of the 30 seconds.

Ick. Something's rotten in the state of Denmark. This distortion is so bad that it's *visible* - you don't even have to take an FFT to see it.

*Exercise:* take an FFT and look at the distortion.

*Advanced Exercise:* demonstrate that the distortion is, in fact, in the "exact" method and not in the iterative method... that is to say, show which clock is right.
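A sketch of the first exercise in NumPy rather than Matlab. Instead of porting the whole sweep, it isolates the suspect step: feeding sin() a huge argument. The 1e15 offset is a deliberately exaggerated stand-in for the large A + B * R .^ time values, so the effect is visible in a short snippet; at that magnitude adjacent doubles are about 0.1 radian apart.

```python
import numpy as np

fs = 44100
n = np.arange(8192)
f = 1000.0
offset = 1e15  # exaggerated stand-in for the big phase values late in the sweep

# Naive: feed sin() the huge argument directly. The addition rounds the phase
# to the (coarse) grid of representable doubles near 1e15.
naive = np.sin(2 * np.pi * f * n / fs + offset)

# Careful: reduce the big constant mod 2*pi first, so sin() sees small arguments.
careful = np.sin(2 * np.pi * f * n / fs + offset % (2 * np.pi))

# Compare off-peak spectral energy: the phase quantization in `naive`
# shows up as extra energy away from the tone's bin.
window = np.hanning(n.size)

def offpeak_energy(x):
    spectrum = np.abs(np.fft.rfft(x * window))
    peak = int(round(f * n.size / fs))
    spectrum[max(peak - 8, 0):peak + 8] = 0.0  # zero out the tone itself
    return float(np.sum(spectrum ** 2))
```

With these numbers, offpeak_energy(naive) comes out well above offpeak_energy(careful) - the naive version's distortion is right there in the spectrum.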

EDIT: On second glance it looks like the second picture is just showing a horizontal shift between the two signals, which is expected. I'll need to dig deeper if I'm to prove my assertion that the iterative method produces less distortion than the "exact" method.

Are you sure it causes distortion? Or does it just cause inaccuracy in frequency? Maybe you need to take that FFT after all…

The source of the… not-quite-right-ness… of the audio is here:

```matlab
phase = A + B * R .^ time;
signal = amplitude * sin(phase) + dc;
```

As time goes on, R .^ time grows, and bigger and bigger numbers are fed to sin(). Floating-point types get less and less accurate in absolute terms as the numbers get bigger and the point gets moved farther and farther to the right.
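A quick way to see the shrinking accuracy, sketched in Python (Matlab's eps(x) reports the same quantity): math.ulp(x) gives the gap between x and the next representable double. For the sweep above, B works out to roughly 7.7e7, so the intermediate B * R .^ time sits around 8e7.

```python
import math

# Spacing between adjacent representable doubles (one ULP) at various magnitudes.
# Near 2*pi a phase is representable to ~1e-15 radians; near the ~8e7-sized
# intermediates of this sweep, only to ~1e-8.
for magnitude in [2 * math.pi, 1e4, 8e7, 1e15]:
    print("spacing near %g: %g" % (magnitude, math.ulp(magnitude)))
```

Every doubling of the phase doubles the coarseness of the grid sin() gets sampled on, which is exactly the gradual degradation the naive method suffers.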

I'll take another crack at demonstrating this effect experimentally.

Thanks for these posts - this one and the 'bad approach' post; they've been quite helpful.

I'm not too sure if you can really describe the method above as iterative, however.

I'm presently trying to figure out how to do something similar, but using an exponential function to decrease the time interval between successive outputs of +/-1, which are then joined up with haversines in a controller, to provide a sine-sweep-ish signal for recording the frequency response of a system. Trouble is, the controller can only deal with 400 points - call this n - and my problem is figuring out all the relationships between t_total, w_start, w_end, deltaT(t), n, etc. I thought it would be quite straightforward, hoho, but I think the problem is overconstrained by trying to define the test duration AND the frequencies AND the number of data points….

All good head scratchy stuff.

Beautiful! You should mention that starting off with an initial phase of pi/2 generates a sweep in marvelous quadrature with the initial-phase-0 sweep. Working stiffs could use that; I was one long, long ago.