I don't quite get the argument. If my applications can't run on current hardware, I'm dead in the water. I can't wait for the next CPU.
The thing is, that's the way people have worked for the past 20 years. A little story goes a long way toward describing how the mentality works.
During the NT 3.1 ship party, a bunch of us were standing around Dave Cutler while he was expounding on something (aside: have you ever noticed this phenomenon, where everybody at a party clusters around the bigwig? Sycophancy at its finest). The topic at hand at the time (1993) was Windows NT's memory footprint.
When we shipped Windows NT, the minimum memory requirement for the system was 8M, the recommended amount was 12M, and it really shone with somewhere between 16M and 32M of memory.
The thing was that Windows 3.1 and OS/2 2.0 were both targeted at machines with between 2M and 4M of RAM. We were discussing why NT 3.1 was so big.
Cutler's response was something like "It doesn't matter that NT uses 16M of RAM - computer manufacturers will simply start selling more RAM, which will put pressure on the chip manufacturers to drive their RAM prices down, which will make this all moot". And the thing is, he was right - within 18 months of NT 3.1 shipping, memory prices had dropped to the point where it was quite reasonable for machines to come out with 32M or more of RAM. Of course, the fact that we put NT on a severe diet for NT 3.5 didn't hurt (NT 3.5 was almost entirely about performance enhancements).
It's not been uncommon for application vendors to ship applications that only ran well on cutting-edge machines, on the assumption that most of their target customers would upgrade their machines within the lifetime of the application, and thus it wouldn't matter if the app was slow on current machines. That lifetime varies by category: 3-6 months for games (games are special - gaming customers tend to have bleeding-edge machines, since games have always pushed the envelope), 1-2 years for productivity applications, and 3-5 years for server applications.
It's a bad tactic, IMHO - an application should run well on both the current generation and the previous generation of computers (and so should an OS, btw). I previously mentioned one tactic that was used (quite effectively) to ensure this: for the development of Windows 3.0, the development team was required to use 386/20s, even though most of the company was using 486s.
But the point of Herb's article is that this tactic is no longer feasible. From now on, single-threaded CPU performance won't continue to improve exponentially. Instead, CPUs will improve in power by getting more and more parallel (and by having more and more cache, etc.). Hyper-threading will continue to improve, and while the OS will be able to take advantage of it, applications won't unless they're modified.
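To make that concrete, here's a minimal sketch in C using the Win32 thread APIs (the global array, the Worker routine, and the multiply-and-add "work" are hypothetical stand-ins, not anything from Herb's article). The serial version of this loop runs on one core no matter how many the machine has; only when the work is explicitly split across threads do the extra cores contribute anything:

```c
#include <windows.h>

#define NUM_THREADS 4
#define ITEMS 1000000

static double g_data[ITEMS];

/* Each worker processes one slice of the array. */
typedef struct { int begin; int end; } SLICE;

static DWORD WINAPI Worker(LPVOID param)
{
    SLICE *s = (SLICE *)param;
    for (int i = s->begin; i < s->end; i++) {
        g_data[i] = g_data[i] * 2.0 + 1.0;   /* stand-in for real work */
    }
    return 0;
}

int main(void)
{
    HANDLE threads[NUM_THREADS];
    SLICE slices[NUM_THREADS];
    int chunk = ITEMS / NUM_THREADS;

    /* The single-threaded equivalent is one for-loop over the whole
       array - and it stays on one core forever. The threaded version
       has to slice the data, spin up workers, and wait for them. */
    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].begin = t * chunk;
        slices[t].end   = (t == NUM_THREADS - 1) ? ITEMS : (t + 1) * chunk;
        threads[t] = CreateThread(NULL, 0, Worker, &slices[t], 0, NULL);
    }
    WaitForMultipleObjects(NUM_THREADS, threads, TRUE, INFINITE);
    for (int t = 0; t < NUM_THREADS; t++) {
        CloseHandle(threads[t]);
    }
    return 0;
}
```

None of that restructuring happens for free - which is exactly why existing applications won't get faster on these new CPUs unless somebody goes in and modifies them.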
Interestingly (and quite coincidentally) enough, it's possible that this performance wall will affect *nix applications more than it will affect Windows applications (and it will especially affect *nix derivatives that don't have a preemptive kernel and fully asynchronous I/O, the way current versions of Linux do). Since threading has been built into Windows from day one, most of the high-concurrency application space there is already multithreaded. I'm not sure that's the case for *nix server applications - for example, applications like the UW IMAP daemon (and other daemons that run under inetd) may have quite a bit of difficulty being ported to a multithreaded environment, since they were designed to be single-threaded (other IMAP daemons, like Cyrus, don't have this limitation, btw). Please note that platforms like Apache don't have this restriction, since (as far as I know) Apache fully supports threads.
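For anyone who hasn't run into the inetd model, here's a minimal, hypothetical sketch in C of the pattern those daemons follow (the commands and state here are made up for illustration - this isn't the UW code). inetd accept()s the connection and hands it to the process as stdin and stdout, so the daemon assumes exactly one client per process and is free to keep all of its per-client state in globals:

```c
/* Hypothetical inetd-style daemon: one process per connection,
   client socket already wired up as stdin/stdout by inetd. */
#include <stdio.h>
#include <string.h>

/* Per-client state in globals - harmless with one process per client,
   but a data race the moment two clients share an address space.
   A thread-per-client port means finding and restructuring all of it. */
static int  g_logged_in;
static char g_selected_mailbox[256];

int main(void)
{
    char line[512];

    printf("* OK hypothetical-imapd ready\r\n");
    fflush(stdout);

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (strstr(line, "LOGIN") != NULL) {
            g_logged_in = 1;
            printf("a1 OK logged in\r\n");
        } else if (strstr(line, "SELECT") != NULL && g_logged_in) {
            strncpy(g_selected_mailbox, "INBOX",
                    sizeof g_selected_mailbox - 1);
            printf("a2 OK INBOX selected\r\n");
        } else if (strstr(line, "LOGOUT") != NULL) {
            printf("a3 OK bye\r\n");
            break;
        } else {
            printf("* BAD not implemented in this sketch\r\n");
        }
        fflush(stdout);
    }
    return 0;   /* process exits; inetd spawns a fresh one per connection */
}
```

The fix isn't conceptually hard - move the globals into a per-connection context structure and pass it to every function that touches them - but it reaches into every corner of the daemon, which is where the "quite a bit of difficulty" comes from.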
This posting is provided "AS IS" with no warranties, and confers no rights.