When you decide to travel at the speed of light, you have to accept the consequences

One of the challenges of developing Windows 8 for ARM devices is that, well, for a long time, there were no Windows ARM devices to test on, which made it more difficult to verify that the designs were suitable and that performance was acceptable. Eventually, the devices started to materialize, and it became possible to run Windows on the ARM devices to see how well things actually worked.

Naturally, the devices that initially showed up were not the final "ready for market" devices. After all, the hardware companies were busy developing the devices at the same time the Windows team was developing the software to run on them.

One of the problems in the earlier builds of the devices was that the power management drivers and firmware were not very reliable. The devices would sometimes drop into low-power mode even though they were in active use, or they would shut off the touch sensor due to inactivity and be slow to power it back on when the user touched the screen. These are the sorts of glitches you expect from something that is still under development.

One of the tricks we used to isolate these firmware and driver glitches was to apply a configuration we nicknamed the speed of light. In this configuration, all the power management features were disabled, and the machine ran at full power all the time, never shutting off anything. The idea here is that the speed of light was the absolute maximum performance you could get from the device, in the same way that in physics, the speed of light is the absolute maximum speed at which an object can travel.

Now, in practice, the speed of light performance would never actually be achieved, but it at least told us whether a performance problem could be blamed on the hardware (and therefore could be improved by future iterations of the hardware) or whether it was fundamentally a software issue. For example, if calculations took too long under the speed of light configuration, then you knew you had no choice but to optimize the calculations in software, because you were already pushing the hardware to the maximum. The hardware isn't going to save you.
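The triage logic can be sketched as a toy helper. This is not the Windows performance team's actual tooling, just a hypothetical illustration of the reasoning: compare a scenario's normal measurement against its speed-of-light measurement to decide where the gap lives.

```python
def sol_triage(measured_ms: float, speed_of_light_ms: float,
               budget_ms: float) -> str:
    """Classify a performance result using a speed-of-light baseline.

    measured_ms:       time in the normal (power-managed) configuration
    speed_of_light_ms: time with all power management disabled
    budget_ms:         the acceptable time for the scenario
    """
    if speed_of_light_ms > budget_ms:
        # Even with every power-saving feature off, the scenario misses
        # its budget: no amount of hardware or firmware tuning can save
        # it, so the software itself must be optimized.
        return "optimize software"
    if measured_ms > budget_ms:
        # The scenario fits the budget at full power but not in the
        # normal configuration, so the gap is attributable to power
        # management firmware/drivers and may improve in later hardware.
        return "blame hardware/firmware"
    return "meets budget"
```

For example, `sol_triage(measured_ms=120, speed_of_light_ms=90, budget_ms=100)` returns `"blame hardware/firmware"`: the device is capable of meeting the budget, so the shortfall is in power management rather than the code.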

I was amused by one piece of email that was sent to the performance team regarding the speed of light configuration.

Hi, I followed the instructions to set up the speed of light configuration, and I found that it crashes after about ten to thirty minutes of continuous use.

The speed of light configuration is totally unsupported. Since all the power management safeguards were disabled, the thermal sensor that said "Whoa, I'm running really hot! Could you drop to low power for a while?" was being ignored. A common side effect of the speed of light configuration was random crashes due to overheating.

Such are the dangers of traveling at the speed of light.

Comments (24)
  1. pc says:

    Is it Speed of Light Week on The Old New Thing?

  2. Ted M says:

    Throughout the history of Windows, what RISC/alternate architectures did the team try building for (even experimentally)? I know CE was built for MIPS in addition to traditional x86.

    1. MIPS was also a target architecture for Windows NT, albeit briefly. There was also the Intel i860, which was used for testing the HAL layer in Windows NT but never saw a formal release. Not to mention the Alpha AXP architecture, which NT supported for a while, as well as the PowerPC architecture. And of course there was Itanium, though that wasn’t a RISC architecture in the least.

      Windows CE supported several RISC architectures, including ARM. Not going to bother listing them here, but Wikipedia’s got most of them covered.

      1. parkrrrr says:

        Does the HAL layer help you enter your PIN number at the ATM machine?

        1. Antonio Rodríguez says:

          It depends. Is it the HAL from 2001: A Space Odyssey, or the HAL9000 from Kung Fury?

        2. alegr1 says:

          Only when it runs on that new NT technology.

          1. Muzer says:

            N-Ten Technology? Sounds OK to me…

          2. parkrrrr says:

            Very nice. Thank you for the flashback to the XP startup image.

          3. parkrrrr says:

            By which I of course mean the Win2K startup image. The drugs must be working.

    2. ZLB says:

      There’s a book called Showstopper (ISBN: 978-0029356715). It’s a very good read and covers the development of Windows NT.

      IIRC they initially targeted Intel RISC chips (i860 and i960) and avoided x86 so that platform-specific stuff didn’t creep in.

      1. Antonio Rodríguez says:

        It’s a terrible book. It tries to tell the story of the development of NT, and that story would be hollow without technical details. The author tries to explain the complex concepts of modern OS architecture to a general audience, concepts which he clearly doesn’t understand. I doubt it will be an easy read for non-“computer people”, and its lack of concrete detail is certainly unsatisfying for engineers and computer historians.

        1. Tilmann Krüger says:

          I know exactly what you mean. But it is a good read anyway, because you get to know some of the key people in the development of NT better.

          For myself, I solved the problem by reading the first edition of “Inside Windows NT” (H. Custer, Microsoft Press, 1992) alongside it. It provides the technical detail where “Showstopper!” falls short.

  3. Joshua says:

    I’m kind of amazed the speed of light setting ignored thermal warnings. Oh well. If it had actually made it to release it would have been the thing I would set on a server.

    1. Antonio Rodríguez says:

      The response to thermal warnings is to put the processor into a lower-power mode, which relies on the power management drivers. So when you don’t trust power management, you can’t afford to switch power modes, and you are unable to respond to thermal warnings.

    2. morlamweb says:

      @Joshua: I’m not surprised in the slightest. The point of “speed of light” mode is to run the machine as fast as possible for debugging purposes. The typical response to a warning from a thermal sensor is to throttle down the CPU in an attempt to lower the thermal load. If “speed of light” were to throttle the CPU in response to those warnings, then what would be the point of it in the first place?

      1. Joshua says:

        I’ve seen hardware that will fry if you ignore this.

        1. It wouldn’t surprise me if they did fry their prototypes occasionally. The difference is that it’s prototypes, not consumer hardware, unlike a certain phone manufacturer that shall not be mentioned. ;-)

          1. Erkin Alp Güney says:

            It can be mentioned as the product has long been withdrawn.

        2. Brian says:

          Harking back to the “You’re not debugging hard enough” story from earlier in the week. Frying hardware from software (which I did once, a very long time ago) is a sign that you are trying hard enough. There’s a reason why the first, top-level test that you do has traditionally been called a “smoke test”.

        3. morlamweb says:

          @Joshua: if I could emphasize “for debugging purposes” any harder, I would, but I’m limited by the text input box for comments. Frying hardware is an occasional consequence of debugging, and it’s a risk you take should you ignore those warnings. It’s also a damned good reason why it should NOT be used in production, including for servers. I suspect that “speed of light” was intended for, and primarily used for, short-term debugging.

          1. It wasn’t even for debugging purposes. It was for performance evaluation. You enabled the speed of light configuration, ran your scenario, and then turned it off. You then checked whether your scenario had acceptable performance. If not, then you had to go fix it in software, because you know that the hardware is already giving you everything it’s got.

  4. Jim Lyon says:

    I remember during early development of Windows phones, our solution to the overheating problems. You would get two cans of soda from the fridge, set one on the desk, set the phone on top of that, and set the other on top of the phone. You were then good for an hour or so.

    We probably had the only liquid-cooled phones on the planet.

  5. cheong00 says:

    If those supplied devices are “naked machines” (without the protective case), maybe applying a small heatsink or heat pipe would help.

Comments are closed.
