Is your battery dead?

Popular Science had a good article in the Oct 2004 issue about battery life.  I always knew this was a problem, but I assumed that the solution was just around the corner… But it seems we are 10 or so years away from a real solution, all the while laptops and other consumer electronics are drawing more and more power.  Since 1990 hard disk capacity, CPU speed and RAM size have all grown at or better than Moore’s Law, but power density in batteries has hardly improved at all.  I am amazed that we have been struggling with this problem for so long with no real breakthroughs… Sure, there is some hope for the fuel cell, but it is years from being practical.

What should we do in the meantime?  It sounds like Intel is looking at turning off parts of the hardware that are not used… that is cool… but what should we do in software?  Should we keep building the best, most powerful software we can and assume the power situation will work itself out, or should we constrain cycle (and therefore power) usage at the risk of giving “less-good” experiences?  I suspect that right now this isn’t seen as a huge issue, but as the trend continues I worry that power usage will become the key limiting factor in both hardware and software design.  Before the 1970s oil crisis people didn’t fully internalize the importance of a tank of gas either… now we have the strategic oil reserves and other policies to mitigate the risk.  Will a similar realization happen about battery life?


Comments (9)

  1. The PSP is having a similar issue right now, with certain instructions and tasks using more power than others.

    To be honest with you, I think that CLR-targeted apps could potentially be a place where power management vs. performance could come into play.

    If you are currently plugged in, the JIT could go whole hog, utilizing the most power-hungry instructions and operations that it can in order to improve performance.

    If you are currently running on battery power, the JIT could optimize with an eye towards battery life… adding additional wait states, changing default compression settings to less-processor-intensive algorithms, possibly even having the plugged-in JIT pre-JIT a "power-lite" version of each assembly so that the JIT doesn’t have to run at all on battery.

    Just a thought…
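
    A minimal sketch of that trade-off, in Python (the `on_battery` flag is a stand-in here — a real implementation would query the OS power state, which is platform-specific):

    ```python
    import zlib

    def choose_compression_level(on_battery: bool) -> int:
        """Pick a zlib compression level based on power source.

        Higher levels burn more CPU cycles (and power) for a better
        ratio; on battery we trade some compression for cycles saved.
        """
        return 1 if on_battery else 9

    def compress(data: bytes, on_battery: bool) -> bytes:
        return zlib.compress(data, choose_compression_level(on_battery))

    data = b"the quick brown fox jumps over the lazy dog " * 1000
    fast = compress(data, on_battery=True)    # cheaper to compute, larger output
    best = compress(data, on_battery=False)   # costlier to compute, smaller output

    # Both round-trip to the same data; only the cycles/size trade-off differs.
    assert zlib.decompress(fast) == zlib.decompress(best) == data
    ```

    The same pattern applies to any tunable algorithm: polling intervals, render quality, prefetch aggressiveness — pick the knob setting from the power source rather than hard-coding the maximum.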

  2. sbjorg says:

    Shallower programming abstractions would reduce the total number of instructions required to execute a task. Alternatively, enabling partial evaluation, which has been commonplace for decades in functional programming, would provide much-needed relief. Sadly, for years the response to more efficient code has been: Moore’s law will negate any immediate benefit next year. Only high-performance apps went through the trouble of actually understanding where the cycles were spent.

    I’m glad to see that we are now reaching the end of that road. Hopefully this will fuel much-needed innovation in programming paradigms. We have been biased too long towards generalization rather than specialization. That bias had its reasons, but today’s needs will hopefully make us revisit it. For a good example of what can be done, look up MIT’s exokernel on Google.

  3. I like Michael Russell’s idea. If the CLR and JIT were able to make use of Celeron-specific instruction sets on laptops for power saving, that would be a big step forward. Same thing with any mobile device.

  4. "the best, most powerful software we can" should also be the tightest, least bloated software. This idea was very popular back in the late ’70s and early ’80s, when only the smallest, tightest code was really useful on microprocessors. Now the most common attitude is "bloat like crazy; Moore’s law will clean up." It is possible to have code that is both tight and powerful; it just takes much more thinking and work, as was done in the old days of tiny storage. Managed code makes it easy to write bloated libraries. I also hope the power problem pushes back on the trend toward overengineering and overabstraction.

  5. Back when I still used to be involved in the electronic side of things as well as software, bus cycles were one of the really expensive things from a power budget point of view. Do you know if this is still the case?

    What I mean is this: are the differences in power consumption between executing this instruction vs. that one remotely significant in comparison with optimisation techniques that enable more efficient use of the cache, thus reducing the number of external bus cycles required to get the relevant piece of work done?

    I would expect (but I’m quite prepared to believe that I’m way out of date here) that careful use of memory is likely to outweigh careful instruction selection, for two reasons: (1) it is already one of the most significant things you have to get right to optimise for speed, and the faster your code gets its job done, the sooner you can put everything back into a low-power idle state; and (2) external bus cycles are expensive because you have to make signals go all the way along whacking great big copper tracks between the chips, rather than merely getting from A to B on a single wafer.

    If this is right, then I would have thought that all the advice Rico gives on how to write code that performs well will turn out to offer power consumption benefits too.
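
    A small illustration of the cache point above (in Python, where the cache is hidden, but the access pattern is what matters in any language — in compiled code the row-order walk touches memory sequentially, while the column-order walk strides across rows and misses far more often):

    ```python
    # Same work, two access orders over an N x N matrix stored row by row.
    N = 512
    matrix = [[(i * N + j) % 97 for j in range(N)] for i in range(N)]

    def sum_row_major(m):
        # Sequential access: adjacent elements, good spatial locality,
        # few cache misses and few external bus cycles.
        return sum(v for row in m for v in row)

    def sum_col_major(m):
        # Strided access: jumps a whole row between touches,
        # defeating the cache in a compiled implementation.
        return sum(m[i][j] for j in range(N) for i in range(N))

    # Identical results; only the memory traffic pattern differs.
    assert sum_row_major(matrix) == sum_col_major(matrix)
    ```

    The power argument is exactly the speed argument: fewer misses means less bus activity and an earlier return to idle.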

  6. Mike Dimmick says:

    The main thing I can offer from three years of handheld software development: always block, never poll. Give your processor the maximum opportunity for sleep, which helps power consumption and cooling.

    Keeping your data colocated in memory will probably help power usage in another way – some RAM systems can be told to power down chips that aren’t currently being used.
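
    The block-don’t-poll rule above can be sketched in a few lines of Python (`threading.Event` stands in for whatever wait primitive your platform offers):

    ```python
    import threading
    import time

    data_ready = threading.Event()

    def producer():
        time.sleep(0.05)       # simulate work arriving later
        data_ready.set()

    threading.Thread(target=producer).start()

    # Bad: polling spins the CPU on a flag, preventing sleep states.
    # while not data_ready.is_set():
    #     pass

    # Good: blocking parks the thread in the OS so the CPU can sleep
    # until the event is actually signalled.
    got_it = data_ready.wait(timeout=2.0)
    assert got_it
    ```

    The blocking wait consumes essentially no cycles while idle; the commented-out poll loop would keep a core at full power the whole time.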

  7. I think you are right, Ian. Anything that causes cache misses will consume more power. So smaller data structures are better, as are smaller libraries with heavy code reuse.

    Now let us look at something like Avalon. The data structures are large and the API is huge: the Color type is so large they had to use a class instead of a struct, all coordinates are doubles, the Length struct carries extra type data, the scene graph is represented twice in managed and unmanaged code, there is heavy use of decorators and a reliance on the GPU. This was not designed for small machines with limited power.

  8. I talked a while back about some issues around battery life, but recently a co-worker sent me a really…