Remember to test performance!

When you write new code whose execution time depends on the number of records to process, you must make sure the time grows proportionally to the number of records. In many cases it is not obvious how slow an algorithm becomes on large input sets.
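
The trap is easy to hit in everyday code. As an illustration (a Python sketch, not X++), these two functions look almost identical, yet the first is quadratic because "x in seen" scans a list on every iteration, while the second stays linear on average thanks to a set:

```python
def has_duplicate_slow(items):
    """O(n^2): 'x in seen' scans the whole list on every iteration."""
    seen = []
    for x in items:
        if x in seen:
            return True
        seen.append(x)
    return False

def has_duplicate_fast(items):
    """O(n) on average: set membership is a single hash lookup."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers; only a measurement like the job below exposes the difference on large inputs.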

Here is an example X++ job. You can use it to understand how well your algorithm does from a performance point of view.

static void testPerformance_Job(Args _args)
{
    #define.Limit(128)          // largest number of elements to test
    #define.ExecutionTime(1)    // simulated processing cost per element, in ms
    int elements;
    int time;

    // Simulates an O(n) algorithm: total time grows linearly with _elements.
    void doSomething_LinearTime(int _elements)
    {
        sleep(_elements * #ExecutionTime);
    }

    // Simulates an O(n^2) algorithm: total time grows with the square of _elements.
    void doSomething_QuadraticTime(int _elements)
    {
        sleep((_elements * #ExecutionTime) * (_elements * #ExecutionTime));
    }

    elements = 1;
    while (elements <= #Limit)
    {
        time = WinAPI::getTickCount();
        doSomething_LinearTime(elements);
        //doSomething_QuadraticTime(elements);
        time = WinAPI::getTickCount() - time;

        info(strFmt("Elements %1, time %2 ms, %3 elements/ms",
                    elements,
                    time,
                    time == 0 ? 0 : elements / time));

        elements = elements * 2;
    }
}

The idea is pretty simple: double the number of elements on every iteration and check how many elements get processed per millisecond. The job above produces the following output:

Elements 1, time 0 ms, 0 elements/ms
Elements 2, time 0 ms, 0 elements/ms
Elements 4, time 0 ms, 0 elements/ms
Elements 8, time 16 ms, 0,50 elements/ms
Elements 16, time 15 ms, 1,07 elements/ms
Elements 32, time 31 ms, 1,03 elements/ms
Elements 64, time 78 ms, 0,82 elements/ms
Elements 128, time 125 ms, 1,02 elements/ms

So, the throughput is roughly constant: the "complexity-to-time" function is linear. (Ignore the lines with zero execution time; getTickCount has a resolution of about 15 ms, so very short runs are not measured reliably.) Cool. Now, let's check another method's performance:

        //doSomething_LinearTime(elements);
        doSomething_QuadraticTime(elements); 

The following execution log is printed: 

Elements 1, time 0 ms, 0 elements/ms
Elements 2, time 0 ms, 0 elements/ms
Elements 4, time 16 ms, 0,25 elements/ms
Elements 8, time 62 ms, 0,13 elements/ms
Elements 16, time 265 ms, 0,06 elements/ms
Elements 32, time 1030 ms, 0,03 elements/ms
Elements 64, time 4087 ms, 0,02 elements/ms
Elements 128, time 16380 ms, 0,01 elements/ms
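
A quick way to read such logs is to estimate the exponent k in time ≈ n^k from two measurements. A small Python sketch (using the larger rows, since the small timings are dominated by timer resolution):

```python
from math import log

def growth_exponent(n1, t1, n2, t2):
    """Estimate k in time ~ n**k from two (size, time_ms) measurements."""
    return log(t2 / t1) / log(n2 / n1)

# Rows from the linear run:    32 -> 31 ms, 128 -> 125 ms
print(round(growth_exponent(32, 31, 128, 125), 2))      # close to 1
# Rows from the quadratic run: 32 -> 1030 ms, 128 -> 16380 ms
print(round(growth_exponent(32, 1030, 128, 16380), 2))  # close to 2
```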

Here, the number of processed elements per millisecond keeps dropping. Imagine how long it would take to process 1.000.000 elements at this rate: roughly 10^12 ms, which is about 32 years. The linear algorithm would handle the same 1.000.000 elements in under 17 minutes.
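
The arithmetic behind such an extrapolation, as a Python sketch (assuming time ≈ n ms for the linear algorithm and n^2 ms for the quadratic one, which matches the logs above):

```python
n = 1_000_000

linear_ms = n        # time ~ n ms, per the linear run's log
quadratic_ms = n**2  # time ~ n^2 ms, per the quadratic run's log

minutes = linear_ms / 60_000                         # about 16.7 minutes
years = quadratic_ms / (365.25 * 24 * 3600 * 1000)   # about 31.7 years
print(round(minutes, 1), "minutes vs", round(years, 1), "years")
```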

So, whenever you are in doubt about how fast your algorithm is as the number of elements grows, write this kind of job. It will immediately reveal performance problems.