Cost of array bounds checking -- one experiment

Sometimes people ask me what the overhead of array range checking is. A while ago I did a quickish experiment to measure the cost of the bounds checks introduced by the JIT in a typical application. This is just one part of the managed safety net, but it's a fairly easy one to get a handle on compared to some of the other overheads. Notably, it's a lot trickier to measure the marginal cost of the explicit range checking done in helper methods in the base classes.
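To make the tax concrete, here is a minimal C# sketch of the kind of access the JIT protects. The comments describe the check conceptually; the exact codegen of course varies by JIT version, and this is my illustration rather than anything from the experiment itself:

```csharp
using System;

static class BoundsDemo
{
    static int Sum(int[] a, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
        {
            // Before this load the JIT emits the moral equivalent of:
            //   if ((uint)i >= (uint)a.Length) throw new IndexOutOfRangeException();
            // i.e. one unsigned compare-and-branch per access. When the loop
            // is bounded by a.Length itself the JIT can often prove the access
            // safe and elide the check; bounding by an unrelated n keeps it live.
            total += a[i];
        }
        return total;
    }

    static void Main()
    {
        int[] data = { 1, 2, 3, 4 };
        Console.WriteLine(Sum(data, data.Length)); // prints 10
    }
}
```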

The test case was a Windows Presentation Foundation application that basically tried to render the same thing you might see on the MSN home page. The time being measured is from process startup (t=0) until Main exits (which is almost, but not quite, everything). This is wall-clock time on a normally configured machine, so there is noise in the system, which I attempt to remove as described below.
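The startup-to-Main-exit interval can be captured from inside the process itself. This is a minimal sketch of that style of measurement, not the actual harness used; RunScenario is a hypothetical stand-in for the rendering workload:

```csharp
using System;
using System.Diagnostics;

class StartupTimer
{
    static void Main()
    {
        RunScenario(); // hypothetical stand-in for rendering the page

        // Wall-clock time from process creation (t=0) to this point,
        // which is essentially the startup-to-Main-exit interval.
        TimeSpan elapsed = DateTime.Now - Process.GetCurrentProcess().StartTime;
        Console.WriteLine("Elapsed: {0:F3}s", elapsed.TotalSeconds);
    }

    static void RunScenario()
    {
        // The real test rendered an MSN-home-page-like scene in WPF;
        // any workload goes here.
    }
}
```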

To facilitate this experiment I used a then-recent build of WPF, built against PD3 of the CLR (so this is 2004 vintage). I modified mscorjit.dll so that it does not introduce array bounds checks for arrays or strings. This modified jitter was built using a slightly different process than the official build (I did a standard dev build), so to control for that difference I also rebuilt the standard jitter, unmodified, in the same way. So in this experiment I'm comparing two jitters, both built by me in the same way. To reduce the impact of normal machine noise I removed the 10 slowest runs, but I show the full data as well.

Times were gathered with everything hot in the disk cache.
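The trimming is straightforward: sort the run times, drop the k slowest, and summarize what's left. A minimal sketch of that reduction (my reconstruction, not the original tooling):

```csharp
using System;
using System.Collections.Generic;

static class TrimmedStats
{
    // Sort the run times, drop the k slowest, and report the mean and
    // sample standard deviation of the remainder.
    public static void Summarize(List<double> seconds, int k)
    {
        seconds.Sort();                     // ascending: slowest runs at the end
        int n = seconds.Count - k;          // e.g. 36 runs - 10 = 26

        double sum = 0;
        for (int i = 0; i < n; i++) sum += seconds[i];
        double mean = sum / n;

        double ss = 0;
        for (int i = 0; i < n; i++) ss += (seconds[i] - mean) * (seconds[i] - mean);
        double sd = Math.Sqrt(ss / (n - 1));

        Console.WriteLine("N={0}  mean={1:F3}s  sd={2:F3}s", n, mean, sd);
    }
}
```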

Time Results:

Test                            N    Average   Std. Dev.
Standard, all runs              36   1.661s    0.275s
Standard, slowest 10 removed    26   1.506s    0.054s
Norange, all runs               36   1.609s    0.574s
Norange, slowest 10 removed     26   1.497s    0.017s
Delta, all runs                 36   0.052s
Delta, slowest 10 removed       26   0.009s
% Delta, all runs               36   -3.1%
% Delta, slowest 10 removed     26   -0.6%

But notice that in both cases the delta is within one standard deviation of the mean. Based on this experiment we cannot reject the null hypothesis that range checking makes no difference to execution time (i.e., the noise is making a bigger difference than the checks are).
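To put a rough number on that: using the trimmed rows from the table, a Welch-style two-sample t statistic comes out around 0.8, comfortably below the roughly 2.0 you'd need for significance at the 5% level. A sketch of the arithmetic:

```csharp
using System;

class SignificanceCheck
{
    static void Main()
    {
        // Trimmed rows from the table above.
        double m1 = 1.506, s1 = 0.054;      // standard jitter
        double m2 = 1.497, s2 = 0.017;      // no-range-check jitter
        int n = 26;                         // runs per jitter after trimming

        // Welch's two-sample t statistic (unequal variances).
        double t = (m1 - m2) / Math.Sqrt(s1 * s1 / n + s2 * s2 / n);

        // Prints roughly 0.81 -- far short of the ~2.0 needed for
        // significance at the 5% level, so the null stands.
        Console.WriteLine("t = {0:F2}", t);
    }
}
```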

Space Results:

Working set numbers were very unstable due to the interactive nature of the test case; the run-to-run variation is even more pronounced for working set than it is for time, and I strongly feel that it completely swamps any meaningful analysis of working set. However, it's easy enough to see the absolute difference in the size of the assemblies on disk, which I think accurately captures the space tax.

Assembly                     Checked (bytes)  Unchecked (bytes)  Delta
mscorlib.ni.dll                    9,777,152          9,703,424  -0.75%
System.Drawing.ni.dll              1,585,152          1,581,056  -0.26%
System.Security.ni.dll               901,120            897,024  -0.45%
System.ni.dll                      6,713,344          6,668,288  -0.67%
System.Windows.Forms.ni.dll       12,132,352         12,099,584  -0.27%
Others (similar, not shown)              ...                ...     ...
Total                             73,719,808         73,359,360  -0.49%
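Reproducing the size comparison is just a matter of summing the native image sizes on disk. This sketch assumes hypothetical directories holding the two sets of NGen images:

```csharp
using System;
using System.IO;

class SizeDelta
{
    static void Main()
    {
        // Hypothetical directories holding the two sets of NGen images.
        long withChecks = TotalBytes(@"C:\images\checked");
        long noChecks   = TotalBytes(@"C:\images\unchecked");

        double delta = (noChecks - withChecks) / (double)withChecks;
        Console.WriteLine("Delta: {0:P2}", delta);   // about -0.49% for the totals above
    }

    // Sum the sizes of all native images (*.ni.dll) in a directory.
    static long TotalBytes(string dir)
    {
        long total = 0;
        foreach (string file in Directory.GetFiles(dir, "*.ni.dll"))
            total += new FileInfo(file).Length;
        return total;
    }
}
```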

So basically, at least in this experiment, the cost was within the noise level of the test. Of course your mileage may vary, but this seems to agree with anecdotal experience in larger applications.