Larry: You said Macros work to hide the complexity and say so like it is a bad thing.. ? Excuse me but I thought that was the POINT of using a Macro..
Actually, in the world in which I live (writing systems programs that exist in the working set of hundreds of millions of users), hiding complexity is a very, very bad thing.
You need to be VERY careful whenever you do something that hides complexity, because it's likely to come back and bite you on the behind. I wrote up this story back in April, but it bears repeating:
Way back in the days of MS-DOS 4.0, I was working on the DOS 4 BIOS, and one of the developers who was working on the BIOS before me had defined a couple of REALLY useful macros to manage critical sections. You could say ENTER_CRITICAL_SECTION(criticalsectionvariable) and LEAVE_CRITICAL_SECTION(criticalsectionvariable) and it would do just what you wanted.
At one point, Gordon Letwin became concerned about the size of the BIOS; it was something like 20K, and he didn't understand why it would be so large. So he started looking, and he noticed these two macros. What wasn't obvious from the macro usage was that each expansion of those macros generated about 20 or 30 bytes of code. He changed the macros so that instead of expanding the code inline, they called out-of-line functions, and saved something like 4K of code. When you're running on DOS, that was a HUGE savings.
Because the macro hid complexity, we didn't realize that we'd hidden a huge size problem in one line of source code. When you're dealing with an OS that runs on machines with 256K of RAM, that hidden complexity is a big issue.
The usual reaction people have to this story is "What's the big deal? That was 20 years ago, you idiot. Machines now come with gigabytes of RAM; nobody cares about that stuff."
But they're wrong. The same issue shows up even on today's machines; it just shows up differently. Today, the bottleneck typically isn't the amount of physical RAM on the machine; instead, it's the time it takes to read an image from the disk.
The thing is that the speed of RAM is limited by the speed of light and the laws of quantum physics, but the speed of a hard disk is limited by the physics of large moving objects. As a result, reading data from RAM is blindingly fast compared to reading data from disk (no duh).
Every page that's faulted into memory while loading your application adds between 10 and 50 milliseconds (or more, if the machine's got a slow or fragmented hard disk) to the boot time of the system. The larger your code, the more time it'll take to load it into RAM.
If the user can't use the system until your code's been loaded into RAM, then you just kept them from doing their work. And that's the world in which I live - the system can't log the user on until the audio subsystem's loaded (at a minimum to play the logon chime), so it's really important that my code come in as quickly as possible.
We have teams of engineers at Microsoft whose sole job is to profile the disk boot process of Windows - they investigate literally every read from the disk that occurs during the boot process to ensure that it's necessary. If a component's code is taking "too many" disk reads to load, then these guys will come down like a ton of bricks on the poor developer responsible for that code.
So it's CRITICAL that I know where all the code in my application is going, and what it's doing. If I've hidden complexity in a macro, or a templated function, then I may have many pages of code hidden behind a single line of source.