Scripting Type Library Constant Injection Performance Characteristics, Part Three

We’ve got a system that meets all our needs for disambiguating hierarchical constants through partial or full qualification. (In the event of a collision between an enumerated type name and a value in a given library, the type name wins so that full qualification can work.) But what about the performance costs compared to declaring constants the old fashioned way?

There are two costs to consider: the per-use runtime cost, and the page startup runtime cost. Let’s look at the per-use cost first. When you inject

typelib Cheese {
  enum Blue {
    Stilton = 1
    Gorgonzola = 2
  }
}

and then have a line of code that says

x = Stilton

in ASP, what happens? First VBScript searches the top-level global named items for a match for Stilton. We check Response: no, that’s not Stilton. We check Server, Request, and all the rest of the ASP objects, and do not find a match. Finally we check Cheese. None of these are Stilton.

Then we check the default dispatch objects. The ASP objects are not added with the “qualification is optional” flag, so we skip them. We do, however, ask Cheese if it has a Stilton property, and it does.

So far at runtime we’ve done one table search on the global name table and one on the Cheese object. Had there been many type libraries injected, we would potentially have had to search all of them as well, looking for the property. (We cannot compute a fast lookup table for these “second-level” names because the named items might be expando objects that change without warning.)

The hash tables are pretty fast, but since they are case-insensitive, every lookup also involves hashing and canonicalizing the string.

And of course, so far we’ve just determined what property on what object corresponds to this name. We then have to do a property fetch on that name to get the value.
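To make the resolution order concrete, here is a sketch in VBScript, continuing the Cheese example (the variable names are illustrative; the per-step costs are as described above):

```vbscript
' Unqualified: searched through the global named items
' (Response, Server, ...), then through each typelib object,
' then fetched via an IDispatch Invoke.
a = Stilton

' Partially or fully qualified forms resolve the leading name
' first, then fetch each subsequent property via IDispatch.
b = Blue.Stilton
c = Cheese.Blue.Stilton
```

Every dot in the qualified forms is itself a runtime property fetch, so qualification trades the wide search for a chain of lookups on known objects.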

Declared constants, by contrast, are not looked up at all. When you say

Const Yellow = 3

and later use Yellow, the compiler determines that Yellow is a constant and generates the same code as if you’d said 3. The lookup cost at runtime is zero for constants, compared to a potentially large number of hash table lookups plus an IDispatch Invoke call for the typelib case.
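Side by side, the two styles look like this (a sketch reusing the Cheese/Stilton and Yellow examples from above):

```vbscript
' Lexical constant: the compiler folds the name into the literal.
Const Yellow = 3
x = Yellow    ' compiled exactly as if you had written x = 3

' Typelib constant: resolved by name at runtime -- hash lookups on
' the global items and on each typelib object, then an IDispatch
' Invoke to fetch the value.
y = Stilton
```

The source code reads almost identically; all the difference is in what the compiler can do at compile time.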

It should be clear, then, why the runtime cost is much higher for type library constants than for lexical constants. Now consider the startup cost.

When the script first runs, space has to be reserved for all the constants. Their names and values need to be stored. (We need to store the names in case someone uses Eval to evaluate an expression containing a constant; the compiler spun up by Eval will not know that the original compiler for the program had defined any constants.) This is no more expensive than any other local variable assignment with a literal on the right hand side. At startup time, every constant is just another global variable as far as the bytecode interpreter is concerned. (Of course at runtime, the interpreter ensures that attempts to write to these global “variables” fail! They are variables that don’t vary.)
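Here is a sketch of why the names must survive compilation. Eval spins up a brand-new compiler at runtime, and that compiler knows nothing about what the original compiler saw; it can only resolve the name because the name and value were stored as a read-only global:

```vbscript
Const Yellow = 3

' This string is compiled by a fresh compiler at runtime; Yellow
' resolves only because its name and value were kept around.
result = Eval("Yellow + 1")   ' yields 4
```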

That’s a pretty small startup cost. Not zero, by any means, but not very big.

Now consider the startup cost of initializing the type library objects. If you read in two type libraries, each with a dozen enumerations of, say, five values apiece, then you end up constructing 26 objects: two library objects with 72 properties each (twelve enumeration names plus sixty values) and 24 enumeration objects with five properties each. We have to parse the type libraries to do so, and we do so every time you serve the page.

(I added an optimization to this code, though I do not remember whether IIS uses it. The objects we build are free-threaded because they are read-only hash tables. There is a special interface that allows you to parse the type libraries once and reuse the generated object tree in any script engine, thereby eliminating the need to regenerate it every time the script starts up. I believe, though, that IIS does not use this feature.)

The startup cost is enormous for type libraries compared to constants. With constants, the compiler does the work once, and the ASP engine caches and reuses the compiled state over and over again without recompiling. With type libraries, we grovel through the type libraries searching for enumerations every single time, and those type libraries can be huge!

Type library groveling is a convenience feature and was not designed for speed. I have never measured the relative speeds, but I’m surprised by the claim that it is only two orders of magnitude slower. I would have expected more – using a constant at runtime is effectively zero cost. We could probably easily come up with scenarios where the throughput of simple pages was thousands of times slower.

Of course, realistically, you have to ask yourself not “what is faster?” but rather “what is fast enough?” and “what is the slowest thing?” Most pages do so many object accesses that these costs are a drop in the bucket. What are a few more object accesses here and there when you’ve already got thousands on a page? I do not want anyone to walk away from this with the thought that the secret to improving ASP page performance is to use constants instead of type libraries. That will certainly work if type library groveling is the slowest thing on your page. It probably isn’t, but on the other hand, it’s probably not a good idea to have one global page that includes every type library under the sun either.

Comments (5)

  1. Bob Riemersma says:

Alas, ASP pages and even intranet client-side DHTML are not the only cases where VBScript, and for that matter lots of typelib constants, come in handy.

    WSH scripters can cope due to the <reference> tag available in .WSF format script source files. Then again, few WSH scripts engage in truly long-lifetime multiple-use activities. Here you still get the typelib penalty with little payback… but in such cases performance isn’t paramount anyway. The odd long-running WSH script still gets the benefit.

    Where the developer is truly in a bind is with HTAs. Since there isn’t any clean support for loading typelibs in IE, the HTA developer is just screwed. Yet a given HTA might have as long a lifetime as any compiled application run on the desktop. The execution model doesn’t resemble ASP pages or typical web pages built with DHTML. It’s a lot more like a VB forms program.

    So an HTA developer using ADO or somesuch bag of components is reduced to importing long lists of Consts… or worse yet placing magic numbers all over the place.

    The result is that both HTAs and WSH scripts (since nobody writes .WSFs) end up laced with magic numbers, resulting in confusing code and maintenance headaches. The .WSF-aversion probably stems directly from the fact that IE can’t use this script source format.

    So a design decision (no typelib support in IE) meant to protect Johnny-no-teeth means none of us can have steak. "Silly wabbit, script is for kids" seems to be the message here.

  2. James Hugard says:

    > and the ASP engine caches and reuses the compiled state over

    > and over again without doing a recompilation

    How, pray tell, can I do this wonderful thing in my own code? Recompilation is one of the most expensive things in our current script host.

  3. James Hugard says:

    > but I’m surprised by the claim that it is only two orders of magnitude slower.

    Hmm… when I originally timed the difference, I had not even considered runtime costs.

    The quoted two-orders of magnitude was for the parse-time of several hundred constants, not execution time. Since we load those constants for every script, and since we load thousands of scripts thousands of times per run, constants loading was one of the more expensive items at one point.

On top of that, ATL in VC6 has a bug which occasionally causes simultaneous type library requests to fail (I can find the KB number, if interested).

For those reasons, I moved us to script-based constants rather than importing multiple typelibs.

    Today, our most expensive item is parse time, which runs about 4 times our average script execution time. We have thousands of lines of script include files that get parsed for every script.

  4. EricLippert says:

    Hey James,

    Use the Clone method. When you clone an engine, all the persistent compiled code blocks are shared between the original and clone, so the cost of recompilation disappears.

    I’ve got a brief description of how ASP caches script engines and threads here: