Macros, Macros, Macros

As I'm climbing the Software Factories mountain (I figure I'm at about 10,000ft of a 14,000ft climb) I'm deep in the middle of thinking (see my last post) about Domain Specific Languages.

In my last post I observed that I'd developed a number of these beasts and I've been thinking about what worked for me in the past and why.

Those of us who go back to the days of writing non-trivial applications in assembly language remember the advent of the Macro Assembler. This wonderful beast allowed you to encapsulate blocks of code you wrote repeatedly into a single line of code. Even better than simply aggregating multiple lines, a macro assembler would allow you to parameterize these blocks. Here's a rough example (in a pseudo PDP-11-ish way; I'll admit I've forgotten the actual syntax, but I think you'll get the idea):

.macro AddMemory arg1,arg2,arg3
mov arg1,r0
add arg2,r0
mov r0,arg3
.endm

(Yes, I know that this macro trashes r0 and that I could have added a mov r0,-(sp) at the beginning and a mov (sp)+,r0 at the end but I wanted to keep it simple to make my point.)

One would then write code like:

                           AddMemory input1,input2,output1

and hopefully, with judicious use of this kind of technology, slightly raise the abstraction level of building one's application.

The problem with macros is that they were often the private domain of the macro's author. If you had to work with someone else's code, you had to either (a) spend quite a while understanding all their macros and the assumptions they made (see my parenthetical note above) or (b) always get listings of the code with the macros expanded, thus lowering your level of abstraction back down to the common denominator of the standard assembly language.

Now, why, in the current day and age, would I write about this? The reason is that as we once again try to climb the abstraction ladder, we're going to be faced with the same kinds of thought processes: codifying and parameterizing blocks of knowledge.

We've already done this (and we're seeing the results in both abstraction benefits and long-range compatibility/complexity issues) with things like Windows Forms, ASP.NET and (name your favorite HTML extender here). Now, with C# 2.0 and the addition of generics, and with ASP.NET 2.0, we'll be doing it again.

I personally think that experienced developers do this as a common practice. I recently worked on a large ASP.NET application and, when faced with a complex tabbing/wizard problem, decided I would build a collection of user controls and common code to simplify building pages like the one I faced. At the time there were no other pages which required that architecture, but I just knew in my "bones" that more would come along (which, in fact, they did).

However, I made some decisions that were, in retrospect, short-sighted, and my "macro" didn't scale as far as I would have thought once more stateful enabling and disabling of GUI elements, based on application state, user privileges and domain rules, became a requirement.

If we want things like DSLs to succeed, we'll have to learn how to design, document and evangelize their development and use and, of course, develop best-practice guidelines.

My point here is: when building meta-things (macros, pages, language elements, patterns, graphical design motifs, etc.) that will be parameterized, think carefully about your building blocks, make them extensible, clearly state your assumptions and don't forget about the next person who will have to work with your tools. Almost every tool will be re-used, especially if you think "this one's just for me" :)