ZOMG! This program is using 100% CPU!1! Think of the puppies!!11!!1!1!eleven

For some reason, people treat a program consuming 100% CPU as if it were unrepentantly running around kicking defenseless (and cute) puppies. Calm down already. I get the impression that people view the CPU usage column in Task Manager not as a diagnostic tool but as a way of counting how many puppies a program kicks per second.

While a program that consumes 100% CPU continuously (even when putatively idle) might legitimately be viewed as an unrepentant puppy-kicker, a program that consumes 100% CPU in pursuit of actually accomplishing something is hardly scorn-worthy; indeed it should be commended for efficiency!

Think of it this way: Imagine if your CPU usage never exceeded 50%. You just overpaid for your computer; you're only using half of it. A task which could have been done in five minutes now takes ten. Your media player drops some frames out of your DVD playback, but that's okay, because your precious CPU meter never went all the way to the top. (Notice that the CPU meter does not turn red when CPU usage exceeds 80%. There is no "danger zone" here.)

Consider this comment in which somebody said they want their program to use less CPU but still get the job done reasonably quickly. Why do you want it to use less CPU? The statement makes the implicit assumption that using less CPU is more important than getting the work done as fast as possible.

You have a crowd of people at the bank and only ten tellers. If you let all the people into the lobby at once, well, then all the tellers will be busy—you will have 100% teller utilization. These people seem to think it would be better to keep all the customers waiting outside the bank and only let them into the lobby five at a time in order to keep teller utilization at 50%.

If it were done when 'tis done, then 'twere well / It were done quickly.

Rip off the band-aid.

Piss or get off the pot.

Just do it.

If you're going to go to the trouble of taking the CPU out of a low-power state, you may as well make full use of it. Otherwise, you're the person who buys a bottle of water, drinks half of it, then throws away the other half "because I'm thinking of the environment and reducing my water consumption." You're making the battery drain for double the usual length of time, halving the laptop's run time because you're trying to "conserve CPU."

If the task you are interested in is a low-priority one, then set your thread priority to below-normal so that it consumes CPU time only when there are no foreground tasks demanding the CPU.

If you want your task to complete even when there are other foreground tasks active, then leave your task's priority at the normal level. Yes, this means that it will compete with other foreground tasks for CPU, but you just said that's what you want. If you want it to compete "but not too hard", you can sprinkle some Sleep(0) calls into your code to release your time slice before it naturally expires. If there are other foreground tasks, then you will let them run; if there aren't, then the Sleep will return immediately and your task will continue to run at full speed.
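The "compete, but not too hard" loop can be sketched as follows. This is a Python stand-in for the Win32 pattern the post describes, not the real thing: `process_one` is a hypothetical unit of work, and `time.sleep(0)` merely plays the role of `Sleep(0)` (in CPython it just offers to release the interpreter; on Windows the actual call is the Win32 `Sleep(0)` inside the loop).

```python
import time

def process_one(item):
    # hypothetical stand-in for one unit of real work
    return item * item

def crunch(items, yield_every=1000):
    """Run at full speed, but periodically give up the rest of the time
    slice -- the shape of sprinkling Sleep(0) calls into a work loop.
    If nothing else wants the CPU, the yield returns immediately and
    the loop keeps running flat out."""
    total = 0
    for i, item in enumerate(items):
        total += process_one(item)
        if (i + 1) % yield_every == 0:
            time.sleep(0)  # release the slice before it naturally expires
    return total
```

The work still completes as fast as the system allows; the yield only matters when somebody else is waiting.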

And cheerfully watch that CPU usage go all the way to 100% while your task is running. Just make sure it drops back to zero when your task is complete. You don't want to be a task which consumes 100% CPU even when there's nothing going on. That'd just be kicking puppies.

[Raymond is currently away; this message was pre-recorded.]

Clarification: Many people appear to be missing the point. So let's put it more simply: Suppose you have an algorithm that takes 5 CPU-seconds to complete. Should you use 100% CPU for 5 seconds or 50% CPU for 10 seconds? (Obviously, if you can refine your algorithm so it requires only 2 CPU-seconds, that's even better, but that's unrelated to the issue here.)
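To put numbers on the clarification, here is a trivial sketch (`wall_time` is a made-up helper for illustration, not a real API):

```python
def wall_time(cpu_seconds, utilization):
    """Wall-clock time to finish a job that needs `cpu_seconds` of CPU
    when the program is only allowed `utilization` (a fraction in
    (0, 1]) of the processor."""
    return cpu_seconds / utilization

# The same 5 CPU-seconds of work, either way:
print(wall_time(5, 1.0))   # 100% CPU: done in 5 seconds
print(wall_time(5, 0.5))   # 50% CPU: the user waits 10 seconds
```

Either way the CPU performs the same 5 CPU-seconds of work; capping utilization only makes the user wait longer for it.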

Comments (98)
  1. Sean Kearney says:

I do have one comment to keep in mind. An application whose processing requires disk I/O in any way, shape or form should not eat 100% CPU… Hey, 95% is good, but you HAVE to leave SOMETHING for the kernel to do its job :)

Failing that, set the affinity so the app doesn't steal all the cores (Oooooo, that was ALMOST a "you know who" joke) ;)

  2. Sean Kearney says:

I do have one comment to keep in mind. An application whose processing requires disk I/O in any way, shape or form should not eat 100% CPU… Hey, 95% is good, but you HAVE to leave SOMETHING for the kernel to do its job :)

Failing that, set the affinity so the app doesn't steal all the cores (Oooooo, that was ALMOST a "you know who" joke) ;)

  3. Jeff Zarnett says:

I think there is another scenario users might have in mind where 100% CPU usage can be a problem. Suppose you have a process that goes to 100% CPU, for whatever reason. If the user wants to cancel the action or end the program, doing so can be a painful experience. While that application takes 100% CPU, even getting UI elements drawn on the screen (e.g., the "are you sure you want to cancel?" dialog) can take a very long time, and even after the user manages to order the cancellation, it can take a while for that to take effect, too.

    I have seen situations where if a process goes to 100% CPU it's faster just to hit the reset-button on the front of the case than to wait for task manager to come up, select the offending process, and order its termination.

  4. Jeff Zarnett says:

I think there is another scenario users might have in mind where 100% CPU usage can be a problem. Suppose you have a process that goes to 100% CPU, for whatever reason. If the user wants to cancel the action or end the program, doing so can be a painful experience. While that application takes 100% CPU, even getting UI elements drawn on the screen (e.g., the "are you sure you want to cancel?" dialog) can take a very long time, and even after the user manages to order the cancellation, it can take a while for that to take effect, too.

    I have seen situations where if a process goes to 100% CPU it's faster just to hit the reset-button on the front of the case than to wait for task manager to come up, select the offending process, and order its termination.

  5. Matt Green says:

    Jeff, in my experience, most 100% CPU programs only do that when they jack their thread priority up too high before spinning. The more typical case is all physical memory being exhausted and the page file being hammered.

  6. Michael Kohne says:

The users who want to never see 100% CPU utilization are the same ones who get freaked out by Win 7 using all their memory to pre-fetch stuff they might need later. What they probably really want is for the UI to remain responsive so that they can do things like cancel when things aren't working right. But they have no idea what it is they want; they just know there's a number, and it's high, and when it's not high the thing works more like what they want.

    A little introspection on the user's part would go a long way, but that's a lot to ask of some people.

  7. Joseph Koss says:

    I have so much free CPU time that it seems to put in extra effort rendering blog posts, putting them up twice seemingly just to prove that none of the bits are in error.

  8. Euro Micelli says:

    Sean Kearney: I'm not sure what you are pointing at. If the I/O is synchronous, the kernel will "do its job" during the API call; from your point of view it's all one flow of execution. Even if the I/O is asynchronous, the program can loop around calculating pi digits to its heart's content and the scheduler will still allocate processor time to the kernel as needed — of course you might not get 100% processor associated to your application anymore, but who cares? The CPU is still fully utilized (ignoring context switching overhead, etc).

    Either way, there is nothing special you need to do in your code.

  9. John says:

    In my experience the System Idle Process is the worst offender (http://www.pcmag.com/…/0,2817,1304348,00.asp) /joke

  10. Dan Bugglin says:

    @Michael exactly.  It seems that people are more worried about memory consumption these days but it's the same fallacy.

    If you manage to keep your memory usage below 50%, you can be proud of it, until you realize you could have just bought a PC with half the physical memory for the same effect AND saved money.  If you have the resources you should be using them effectively, not wasting them.

    I was checking out a list of Windows apps the other day and looked at two apparently popular maintenance tools.  They both included memory cleaning tools… my knee-jerk reaction was to dismiss the tools as probably not terribly useful (and perhaps harmful) and the authors as unknowledgeable about Windows maintenance.  Fortunately the tools didn't have any features I already had with CCleaner and the Nirsoft and Sysinternals tool suites so I didn't feel any need to check them out anyway.

  11. Erik Heemskerk says:

    100% CPU usage and not being able to start task manager is usually a case of incorrect thread priorities; explorer.exe is usually running with Normal base priority, so if another process with Normal base priority is burning up all of the CPU cycles, then yes, you're going to have a hard time terminating it. An application that has a background task with the same priority as the foreground thread, with the foreground thread burning 100% CPU is indeed going to have a hard time updating the UI. The solution would be either giving the UI thread a higher than Normal priority or giving the background thread a lower than Normal priority.

  12. Dan Bugglin says:

    Oh yeah, now I have a Raymond Chen article to link people to when this sort of thing comes up instead of having to mash words together and hope they form some degree of coherency (like now).  Thanks Raymond!

  13. K says:

I learned this a while ago and am constantly annoyed when an important process (Visual Studio, usually) uses less CPU than I theoretically want to spare while I am waiting for it to do something (compile C++, usually). I'd like 1 of my 8 cores free, and the other 7 really, really busy. Someone should solve that Halting Problem so we can write a decent scheduler…

  14. Sinan Unur says:

There is nothing more painful than having 64 cores and 512GB of memory at my disposal and having only three cores utilized at a few percent because the input data resides on a network drive. I would do anything to be able to have a few of those cores utilized at 100%.

  15. Henke37 says:

    There is nothing wrong with 100 % CPU usage, even when you want things to be cancelable. The trick is to prioritize the GUI over the big work. And you know, not abandoning the job of manning the message pump.

  16. Omar says:

    Sad but true. I wish my CPU (a core i7) went to 100% for just one second. I have tried hard (I do heavy dev, but somehow can't get it to go to 100 for any reasonable amount of time). I have a feeling I need an SSD.

  17. Dan Bugglin says:

@Erik: That is usually just a problem on single-core processors.  Of course, for the best user experience, you should stick Sleep(0)s where appropriate in your code so single-core machines can still task-switch effectively and the user can continue to use the machine.

    I also try to lower the priority of background threads or long working threads so the machine remains usable.

    Addendum: Aww, the comment box here is sad when I decide not to post (it shows "Please enter a comment" if you clear the comment box and click off of it).

  18. Alex Grigoriev says:

Lack of runaway-thread detection and of priority demotion for runaway threads in the Windows kernel makes 100% CPU a worse problem than you want to pretend it is. And it's really easy to detect such threads: if they use up their time slice continuously, they are running away and should be deprioritized.

    One notable example is Internet Explorer when it opens this glorious blog. Its terrible scripting performance just kills the puppies. It takes a few seconds of a frozen IE with a CPU core pegged at 100%.

  19. John Ludlow says:

    Is this theoretical application the primary thing the user is working with?  Is the user constantly doing stuff with it?  If so, then ok, go nuts, use 100% CPU if it helps you do the job quicker.  A game can use 100% CPU because it's unlikely I'll be doing anything else whilst playing the game.

If I have a reasonable expectation that I can use your app and be able to do other stuff at the same time, then 100% CPU is a bad thing.  Outlook using 100% during Send/Receive is a bad thing, because reading emails in Outlook is just one thing I happen to do at work.

  20. I never thought of it that way before.  Think of all that money I've been spending on insurance, and yet I've never filed a claim, so it's all been going to waste!

  21. Leo says:

    There is value in having reserves. If Napoleon doesn't deploy two reserve battalions in a battle, it certainly doesn't mean that Napoleon should have left those battalions at home. If something is using 100% of something normally, what will it do in an emergency?

    I will agree that improved task scheduling and deployment of multiple cores are ameliorating the experience of having something use 100% of the CPU. However, historically this tended to lead to major degradation to the multitasking and UI experience.

The perspectives of a user of such a program and the developer of such a program are of course different.

Disclaimer: I know far less about the subject than Raymond and many other commenters here.

  22. MikeCaron says:

    "John": Is that guy kidding? "When I hit Ctrl-Alt-Delete, I see that the System Idle Process is hogging all the resources and chewing up 95 percent of the processor's cycles. Doing what? Doing nothing?"

    Dan: Or, just make sure your UI thread is still pumping and at least as high a priority as your work thread.

Alex Grigoriev: Why is that in any way a good idea? How do you define a runaway thread? One that uses all its slice without stopping? What if it's a thread in PiCalc.exe? It's not going to stop until it's told to! And it's not buggy; it's doing math! Computers are good at math!

    What if you're playing Dwarf Fortress? No computer that exists today or any time in the next 30 years will be able to run it at anything less than 100% CPU. If, suddenly, its single thread was lowered in priority, just because some uppity kernel is annoyed that I'm using that CPU that I paid hundreds of dollars for, I'd be annoyed.

    Leo: What kind of emergency are you thinking of here? Invasion by the British? Extra CPU isn't going to help :)

  23. 110% says:

Bloated .net/java software usually eats CPU cycles for breakfast. Lean and neat apps do not.

  24. Will says:

At work we have a 16-bit Windows application that everyone uses. A while back the IT guy was trying to figure out an unrelated issue (network-related) and noticed that a lot of the machines were at 100% CPU utilization. He was also confused that a lot of the newer machines were constantly at 50% or so utilization.  I did my best to explain how the 16-bit Windows emulation (ntvdm.exe/wowexec.exe) in Windows XP does exactly what Windows 3.x did: always be pumping messages through the message loop. That uses 100% of the CPU, and the new machines were dual-core, so they only showed ~50% usage.

  25. jader3rd says:

I think the reason people think that 100% CPU utilization is kicking puppies is that that's the information put right in front of them when they open Task Manager. If Task Manager presented outstanding I/O requests and page fault deltas while making it difficult to discover CPU usage, all of a sudden people would focus on those as the root cause of application slowdown (which in my opinion they are).

    I've seen programs use 100% of the CPU while the computer stayed responsive. I've also seen programs with infinite loops make applications unresponsive.

    I think the root of most problems where people see an application being unresponsive is that the GUI thread is blocked on another thread doing work. I think the TPL will go a long way toward removing the cases where applications block the GUI thread.

  26. rs says:

    "creaothceann": I completely agree, it's the fan. It always annoys when I read a document onscreen, and after 15 minutes or so one of those background system maintainance services discovers I am idle and turns on the fan full-power while doing heavy work on the hard drive. To be fair, the problem is not actually the 100% CPU usage but the fact that those services do something I never requested.

  27. Brian says:

You must always read between the lines when users complain.  When they say, "this application is using 100% CPU!", what they are really saying is, "my computer becomes slow and unresponsive when I run this application!", a legitimate complaint which should not be taken lightly.

  28. Alex says:

A thread at normal priority consuming 100% CPU time does not prevent other threads, or the user, from stopping the application or doing other GUI or non-GUI work, unless it aggressively uses memory (bloating its working set and draining other processes' working sets) or issues I/O requests at a high rate. In that case, switching to another process (one that at first glance runs at the same priority) causes the system to grow that process's working set, probably issuing I/O requests to load pages from the page file, while shrinking the hungry app's working set and causing it to take a huge wave of page faults.

    That is really bad, and that is the experience users are complaining about.

    If a developer targets Vista+, he should definitely consider the new kernel improvements (namely, memory and I/O prioritization) when writing CPU, memory and disk-hungry apps. Mark Russinovich has excellent videos about that published in his SysInternals blog recently. A quick example:

A simple utility that just scans an entire HDD volume with 1,000,000+ files (storing information about them in memory) completely kills a Core i5 750 with 4GB of memory while taking only about 5% CPU. The computer is absolutely unusable despite such small CPU usage because of its memory and I/O bloat.

    Just setting the utility to use lowest-priority memory and I/O (without changing any other bit of the code) turns the turtle into a jet: the user continues to work on the computer as if it were idle, while the app continues to work in the background.

  29. rgove says:

    Raymond: I seem to remember you acknowledging that the MSDN blog server upgrade removed the ability to make hyperlinks to individual comments. Why are you still trying to use these links in new posts?

    [So that the links will work once they fix the bug. The alternative is, what, removing the anchor and then waiting for them to fix the bug, and then going back and adding the anchors back? Since the effect with and without the anchor is the same, why complain about the anchors that don't work yet? -Raymond]
  30. jschroedl says:

@K: For your C++ builds in VS2010 you can leave one of your eight CPUs available. Go to Tools > Options, Projects and Solutions, Build and Run, then change the "maximum number of parallel project builds" setting.

  31. Aaron says:

@Gabe:  Clock rate is proportional to instructions per second, but they are not equivalent, even in a single-core, non-SMT CPU. Superscalar architecture has been around on x86 since the original Pentium, and that, combined with pipelining, allows modern CPUs to execute multiple instructions per clock cycle.  Different parts of the CPU may also run at different rates; the ALUs on Pentium 4 CPUs run at twice the clock rate of the CPU as a whole, for instance.

  32. pete.d says:

    @Leo: "If something is using 100% of something normally, what will it do in an emergency?"

    That's why it's useful to lower the priority of the thread.  It's not that 100% utilization is bad.  It's that 100% utilization by a thread that doesn't want to be pre-empted is bad.  GUI threads rarely use nearly their entire quantum.

So at the same priority, a hard-working CPU-bound thread regularly delays the GUI thread by roughly 50ms every time the GUI thread thinks it has nothing left to do.  Lowering the hard-working thread's priority allows it to still take the vast majority of CPU time (since it's the only thread that really wants it that badly), while letting threads that just want brief moments of the CPU's time get in right away.

    In other words, in an "emergency" (however you define that), you do have headroom.  It's just taken automatically from the hard-working thread.

That said, I think the points about laptops are somewhat valid, in that they are at least worth thinking about.  Efficiency is not the sole criterion in terms of computer utilization, and if the computer is really being used for other things while the hard-working thread is busy, it's not clear that the CPU's power mode is going to be affected much one way or the other anyway.

    The user may indeed prefer to let the computer loaf a bit, so that their lap doesn't get so hot and the fans aren't so noisy.  As a user, it would be nice to have more direct control over this (and indeed, some laptops do have BIOS-supported controls that limit overall system resource utilization, including CPU, for this very purpose).

    In the end, in spite of the laptop scenarios, I still think in the vast majority of cases (well over 90%), the programmer should just lower the thread priority for the hard-working threads and then use as much CPU time as the thread needs.  Trying to anticipate other possible user needs just complicates the code (more bugs!) and isn't guaranteed to do what the user wants anyway.

  33. Henke37 says:

    The only scenario where one user uses all the tellers that I can think up is illegal.

  34. RobertWrayUK says:

    @rgove, the answer is in the name of the blog, "The Old New Thing"… this entry was probably written before the blog software upgrade. =)

That said, if the blog software gets fixed in the future to once again support hyperlinks to individual comments, then Raymond's links will once again work as expected!

  35. ccutrer says:

For those using Visual Studio 2010, try setting the "Multi-processor Compilation" flag in C/C++ General Properties (/MP flag to CL, or /maxcpucount to MSBuild).  This will allow parallelization of files within a single project, not just parallel builds of multiple projects.  This flag alone can cut overall compilation time by a factor of four or more.  For example, on my Core i7 machine, compiling http://github.com/mozy/mordor takes under two minutes with VS2010 and nearly 10 minutes with VS2008, since VS2008 can't keep all the cores busy.  Definitely an example of more CPU usage being better.

  36. PhilW says:

    Sean: "I do have one comment to keep in mind.   the application's processing if it requires Disk I/O in any way shape or form should not eat 100% cpu… Hey 95% is good but you HAVE to leave SOMETHING for the Kernel to do it's job :)"

    Why? 1) A well written program that does efficient asynchronous I/O may not ever need to explicitly wait for I/O to finish. 2) I hope you're not serious. You're proposing that a program (or the sum of all programs) should not exceed 95% in case the kernel can't run? Er… isn't that what Raymond is mocking?

  37. Stephen Cleary - Nito Programs says:

    @Jeff and @Matt: This is usually due to programmers who have just learned about thread priorities.

    I find that most people start out with the *opposite* conception of how thread priorities should be used. They think "hey, I've got to do a CPU-intensive thing for a while" and conclude that they should *raise* their computing thread's priority. Naturally, the correct answer is the opposite – *reduce* the priority, or just leave it alone and let Windows' dynamic priority boosting take care of everything.

    BTW, 100% CPU is good when there's work being done, but a lot of the time it is also indicative of a bug. With the programs that most "normal" (i.e., non-technical) people use, 100% CPU sustained for minutes on end (or 50% on a dual-core, etc) is more likely a bug.

  38. creaothceann says:

    Sometimes I want to run a job on my laptop in the background while I do something else (like reading ebooks or websites). I don't care if the background job takes one hour or two, but I care about the *fan noise*. In this case a program that has 100% CPU utilization is not welcome.

  39. blah says:

    "110%" beat me to it. That is not efficiency. Neither is Outlook 2007's e-mail composition context menu that takes 5-10 seconds to update its hover by redlining one core. I guess *some* people on this blog have never used Visual Studio for that matter.

  40. Gabe says:

A clear distinction must be made. An application I asked to do some work is expected to use 100% of the CPU until it's done. A program that I did not ask to do any work should *not* be using 100% CPU for noticeable periods of time.

    My CPU runs at 2.5GHz, so it should be able to do 2.5 billion little things per second (neglecting things like TurboBoost, hyperthreading, multicore, and dual-issue). If a program has more than a few billion little things to do, it's natural to get suspicious. If a program thinks it has 100 billion little things to do (full CPU for 40 seconds), it's probably a bug.

  41. Lazbro says:

    'If you manage to keep your memory usage below 50%, you can be proud of it, until you realize you could have just bought a PC with half the physical memory for the same effect AND saved money.  If you have the resources you should be using them effectively, not wasting them'

    If you're using 100% memory, you're one notepad file away from having all your stuff swapped out. The point of buying more memory is to delay the point where your system grinds to a stop. I will never use all 8 GB but that's the whole point. Pegging a core with something useless otoh doesn't matter because there are plenty more of them.

  42. Evan says:

@Jim: Fortunately we have schedulers to prevent "one customer using all ten tellers" scenarios. They aren't perfect — I've seen them go wrong in times of heavy I/O on both Windows and Linux, and if everyone is trying to page it can be a disaster — but for the most part they are pretty good. Generally speaking, if one customer is using all ten tellers it's because everyone else is milling about talking to each other and filling out their deposit and withdrawal slips.

    @Lazbro: You missed the entire point, which is that it's not just program working sets that are sitting in memory and counting towards the "omg you don't have free memory" total. The OS maintains a "buffer cache" where it guesses what files you are likely to access in the near future. Potentially this could be gigs. If it's right, then it sped things up for you! If it's wrong, then oh well, that data is out on disk anyway, so it just drops a couple pages from the buffer cache. In that situation, you're no worse off than if the OS didn't maintain the buffer cache in an effort to keep memory usage down, except that in the latter scenario you'll never win. It MAKES SENSE for the OS to use basically all available memory for the buffer cache for this reason.

  43. Jim says:

    The bank teller analogy would make perfect sense if we were still running CP/M.  A better one, on a multi-tasking operating system, is that you have ten tellers, a crowd of customers, and a *single customer* is using all ten tellers.

  44. Worf says:

Two reasons why 100% might be bad:

    1) Inefficient software. Just because the previous version used only 10% is no excuse for the current version to use 100% for the same task. That's a more common complaint than you think.

    2) Battery life, especially on embedded systems. Power consumption rises with the square of the voltage. If you can step the CPU down to 50% speed, you can save more energy than by running the processor at full speed for half the time, especially if the task is real-time limited (e.g., video or music playback). A media player that can control its CPU utilization can achieve longer battery life by using the CPU at 50% and allowing dynamic frequency/voltage scaling (DVFS) to keep the CPU at the slower speed, rather than running at 100% at full speed and idling at 0% while the rendered buffer plays out, which can force DVFS to keep the CPU at full speed always or to switch speeds constantly.
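Worf's battery-life point can be made concrete with back-of-the-envelope arithmetic. This sketch uses the common dynamic-power model E ≈ C·V²·f·t with made-up (but plausible) voltage/frequency pairs; these are illustrative numbers, not measurements, and the toy model ignores static/leakage power, which on modern parts favors racing to idle:

```python
def dynamic_energy(volts, freq_ghz, seconds, capacitance=1.0):
    """Dynamic CPU energy, roughly C * V^2 * f * t (toy model only)."""
    return capacitance * volts**2 * freq_ghz * seconds

# The same 10 billion cycles of work, two strategies:
race_to_idle = dynamic_energy(1.2, 2.0, 5)    # 100% CPU at full speed for 5 s
half_speed   = dynamic_energy(0.9, 1.0, 10)   # DVFS at half speed/voltage for 10 s

print(race_to_idle, half_speed)  # the slower, lower-voltage run uses less dynamic energy
```

Because the voltage term is squared, running slower at lower voltage can win on energy even though the wall-clock time doubles, which is why the real-time-limited media player case is different from the "finish the batch job fast" case in the article.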

  45. Troll says:

    Are Explorer and MSE designed to consume 100% CPU?

  46. Dave says:

    I think the issue isn't so much "uses 100% of CPU" but "*unnecessarily* uses 100% of CPU".  For example I have some CD-burning software that burns up 100% of one of my power-guzzling heat-producing cores when it's running, while virtually any other CD-burning software I've seen only uses 5-10% at most (for technical reasons I have to use this software in this case, there's no alternative).  This causes the CPU fan to spin up to an annoyingly noisy level, and power consumption to jump dramatically.  Another example is IntelliSense in VC++, which burns close to 100% of CPU every time you make a code change in some situations (there are lots of posts on blogs on how to disable IntelliSense by removing the necessary DLL as a means of correcting this).

  47. Andreas Rejbrand says:

    I do not understand the title ("ZOMG! This program is using 100% CPU!1! Think of the puppies!!11!!1!1!eleven"). Anyone care to explain it?

  48. Spinal Tap says:

    Windows 8's Task Manager will go up to 11.

  49. f0dder says:

    @ccutrer: older versions of VC++ also have the /MP switch – it just wasn't documented/exposed in the GUI. Can't remember when it was introduced, but I've definitely used it with VS2008, and probably VS2005 as well.

    It should also be noted that /MP is different from the "projects to build in parallel" setting. /MP, when used on cl.exe with multiple source files as input, spawns a compiler instance for each source file. The other setting only uses one thread per project it's building.

  50. Neil says:

    This reminds me of the days when I was a student, and a fellow student and I both wrote programs to generate a list of primes up to a million. I still have the printout somewhere. My fellow student wasn't so fortunate, because he wrote his program to generate all the primes up front and print them out afterwards, and his job got killed for being a CPU hog.

    As for not using 100% CPU being a waste, the foreground app is quite entitled to use 100% CPU because we're presumably waiting for it. It's all the background stuff that's making the foreground app unresponsive that we don't like.

  51. Paul Parks says:

    @Andreas Rejbrand: It's multi-layered, so get ready.

    "ZOMG" is a "correction" of "zOMG," which is an imitation of accidentally hitting the 'z' key while trying to press the left shift to capitalize the 'O' in "OMG." Likewise, the "!!11!!" formulation simulates difficulty with the right shift key while trying to type multiple exclamations points (shift-1 on North American keyboards, typically). "Eleven" is more irony, pointing out the tendency to type "11" among several exclamation points. The entire sentence is making fun of people who excitedly post about some new discovery on Usenet or an online forum, in the process discarding good typing practice and using "typical" Internet interjection memes such as "OMG".

    I guess you had to be there.

  52. AndyC says:

    @Worf: But it's not a good idea to try and manage CPU power within the application, far better to let the OS take care of that and use DVFS to scale back the total amount of CPU available. It has a much better idea of what is going on in the system overall, compared to just a single application.

    It's amusing to see that, no matter how good an explanation of why utilising system resources to the maximum is a good idea (and Raymond's explanation is very good), you still see the same old arguments for 'keeping some spare' dragged out.

  53. Barry Kelly says:

    Heat is a problem with 100% CPU consumption. I have a desktop machine which doesn't mind 100% spikes of a few seconds, or even a few minutes, but an hour or so (such as when transcoding video) will ultimately cause it to suspend to cool down from overheating. Sure, I could put better cooling in the system, or underclock the CPU (it's overclocked), but then I'd suffer from either excessive noise during CPU spikes, or from a slower CPU during CPU spikes.

    Meanwhile, one of my laptops gets very hot on the keyboard, uncomfortably hot, even with the fans running full whack. Any process which needlessly uses CPU – such as a web browser without FlashBlock, thus running ads – I resent, because it burns! Many times I'd trade off waiting longer for something to complete, so long as less heat is output.

  54. Marquess says:

    That is of course true. What Raymond talks about is intentionally reducing the amount of CPU usage *just for reducing CPU usage* (over time — it's of course still taking the same amount of cycles in the end).

  55. Different Alex says:

    @Jeff Zarnett: I hope you are joking, right? That is exactly the wrong attitude/reaction. A program that uses 100% CPU while being unable to cancel an action is NOT a Windows problem. It's an application problem (i.e. a bug). The games I play constantly use around 100% of my CPU, modern ones anyway, as they use both cores. However they are just fine with cancelling something, like say, an inventory screen. I hope you don't actually use the reset button :)

    @Michael Kohne: Why Windows 7? My XP does caching all the time and I am sure it wasn't the first Windows (just the one I am using right now). 697828 system cache, 1516960 still usable, yet I only have 2GB of RAM.

    What these "ram freeing" utilities do is actually allocating and filling a lot of memory in order to force unused pages to the page file so you see more "free" RAM in your task manager and brag about it. It doesn't do any good though. I hope nobody actually pays money for those. Ever.

  56. Conerned Citizen says:

    The CPU is a completely different 'beast' than RAM, which is where you are transposing this ideology from. You want the RAM filled to 100% utilization (with caches utilizing the space not occupied by running processes) because it is the fastest storage medium your computer has. That is the argument against RAM cleaners. CPU usage, on the other hand, is not AT ALL like this. The less CPU utilization (as an average over a second as represented by the task manager), the faster your code is being executed. Further, since a normal priority CPU bound thread using 100% of the CPU can bring a single CPU system to its knees, it is something to be concerned about.

    Using less CPU is ideal because that means your code was written more optimally.

    This seems like a transposed argument. Please, don't be defensive, just listen and think about it.

  57. Evan says:

    @Concerned Citizen: "The less CPU utilization (as an average over a second as represented by the task manager), the faster your code is being executed."

    You're going to have to explain that one a bit more.

    "Using less CPU is ideal because that means your code was written more optimally."

    How is "I'm going to complete in 2 seconds using 50% of the CPU" more optimal than "I'm going to complete in 1 second using 100% of the CPU"?

  58. Different Alex says:

    @Evan and "Conerned Citizen"

    "Conerned Citizen" is kinda right here too. It's just the wording I guess.

    Situation: You have input X and output Y. You have an algorithm A that transforms X into Y.

    Program i uses the CPU for 1 second at 100% to do that.

    Program j uses the CPU for 1/2 second at 100% to do that, which is shown as an average of 50% over the one second interval.

    Which program is better? Clearly program j, as it does the same job by using less CPU resources. Of course things are not that easy to measure and clear cut in real world examples, which is why we end up with the big mess we are making while discussing this whole thing…
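    The distinction is easy to demonstrate: what measures efficiency is the total CPU time a job consumes, not the utilization percentage it shows while running. A minimal Python sketch (the workload passed in is illustrative):

    ```python
    import time

    def measure(fn):
        """Return (wall_seconds, cpu_seconds) for a call to fn.

        A 50% average utilization reading can mean either "efficient:
        finished in half the interval" or "throttled: took the whole
        interval at half speed" -- only total CPU time tells them apart.
        """
        wall0, cpu0 = time.perf_counter(), time.process_time()
        fn()
        return time.perf_counter() - wall0, time.process_time() - cpu0

    # Program j from the example: same output, less total CPU time.
    wall, cpu = measure(lambda: sum(range(1_000_000)))
    print(f"wall={wall:.3f}s cpu={cpu:.3f}s")
    ```

    Two programs averaging 50% utilization over a second can thus be a well-optimized program that finished in half a second, or a throttled one that dragged the same work out over the full second.
    
    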

  59. ErikF says:

    For those who are worried about power consumption and/or fan noise, I would suggest that you are looking in the wrong place for solving that problem. Windows has had *power management* controls available for a long time now; perhaps you should look there first!

  60. Dave says:

    >For those who are worried about power consumption and/or fan noise,

    >I would suggest that you are looking in the wrong place for solving

    >that problem. Windows has had *power management* controls available

    >for a long time now; perhaps you should look there first!

    The only thing you can do with those is permanently cripple your system to never run at more than (say) 50% capacity.  I want it to be able to occasionally spike up to 100% for things that require a bit of extra horsepower now and then (after all, I paid for that); whether it runs at 100% for 5 seconds or 50% for 10 seconds makes no difference in battery life. What I don't want is some badly-written piece of junk sucking 100% of CPU for 30 minutes. Even sucking 50% of CPU for 30 minutes (or 100% of a clock-throttled CPU if you want to look at it that way) isn't going to help, it's still killing battery life, just not as fast.

  61. f0dder says:

    @Different Alex: while I agree that those memory cleaners are almost always useless, at least *some* of them have moved to calling SetProcessWorkingSetSize() on all processes rather than doing the stupid "let's allocate until everything pages out". Doing the SPWSS pretty much triggers Windows' default trimming behavior, albeit doing it prematurely.
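    For reference, the call f0dder mentions is a documented Win32 API; here is a minimal ctypes sketch that trims only the current process as an example (the cleaner tools do this for every process on the system):

    ```python
    import ctypes
    import sys

    def trim_working_set():
        """Ask Windows to trim the current process's working set.

        Passing (SIZE_T)-1 for both the minimum and maximum size is what
        triggers the default trimming behavior described above. Returns
        False on non-Windows systems, where the API does not exist.
        """
        if sys.platform != "win32":
            return False
        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        handle = kernel32.GetCurrentProcess()  # pseudo-handle, no cleanup needed
        minus_one = ctypes.c_size_t(-1).value
        return bool(kernel32.SetProcessWorkingSetSize(handle, minus_one, minus_one))
    ```

    The trimmed pages are not freed; they just move to the standby list, so the "free RAM" gain is largely cosmetic and the process pays soft faults to get its pages back.
    
    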

    As for the whole CPU usage thing… it isn't as black and white as some of you are painting it to be.

    The people claiming that "100% usage is bad, a program using 50% is better optimized" are obviously wrong – if you do heavy work and utilize less than 100% CPU, it simply shows you have bottlenecks in your code. How optimized the code is is judged by program runtime (and perhaps other statistics such as memory usage and file access patterns), not by looking at the CPU usage graph.

    On the other hand, I agree that it isn't *always* good to run full throttle even if you can. It's fine for foreground tasks, handling directly user-initiated actions… but for other scenarios, it can be worthwhile to throttle a bit. Yes, setting a lower thread priority for background tasks can certainly be a good idea, but it doesn't really help against laptop heat and fan noise :)

  62. joalex says:

    Unfortunately, the first thing I thought of when reading this title was how often Task Manager will eat an entire core if it is left open for a while, or sometimes across hibernations; this has happened on XP, Vista and 7, single and multi-core. You might imagine the pain that causes on a single core; it can take minutes to try and Win+R -> taskkill.

    In that case, it is quite certainly kicking some puppies. Closing and re-opening Task Manager immediately fixes the problem until 'the next time.'

    I'd always wondered what made Task Manager of all things do that. After all, it normally takes less than 5% CPU time.

  63. AndyC says:

    @Different Alex: Yes, if you can change the code algorithmically to require less CPU in total to complete the same task, that's a good thing. It's entirely different from what Raymond is talking about though, where you are trying to artificially reduce the apparent CPU usage by making things take longer. That's never a good thing, you still have to use the same amount of CPU time in total, you just aren't doing it as effectively. All the counter-arguments here about battery life, fan noise or heat are incorrectly assuming it's better to try and do the OS's job in an application, an optimization which almost always produces a far less desirable result.

  64. Nils says:

    People don't buy expensive/fast computers to run bloated software.

  65. Gabe says:

    Aaron: If you can find a useful program that can execute 5 billion instructions on a single thread in one second, I will be very impressed. The average program is lucky to hit 2 IPC, and even getting one instruction per clock isn't guaranteed.

  66. Engywuck says:

    IIRC these "RAM cleaners" work on two premises:

    1) it's good to have mucho free RAM

    2) you need to have all RAM in one large piece

    So as described they try to allocate several large blocks of RAM, forcing everything swappable out to swap and after freeing claim something like "1040MB free, largest free block 768MB".

    This being wrong on so many levels…

    For starters, doing so just delays every other program that needs to swap back in when touched. Then they seem to not know about the difference between "real" and virtual memory addresses etc.

    There may(!) have been a time when you needed large (physical) blocks of contiguous RAM addresses, but those times (if they ever existed) should be long gone.

    OTOH these programs are a nice way to test if your swap works and if some program has a problem with swapping large amounts of RAM back and forth :)

  67. Gabor Kulcsar says:

    I've had a (desktop) computer that I hated soo much – it had a very badly designed cooling system which was fairly quiet at low CPU usage, but would start to "take off" like a fighter jet if I started doing anything processor intensive (say, watching youtube videos…). So I would have paid a lot of money to lower my CPU's clock, or for any other way to stop that bloody fan – and I hated it when programs would use 100% processor for, say, a minute (typical example: antivirus program update).

    I also have a laptop for work that suffers from the same problem, if not as much… so in my book, 100% CPU usage means bad design (these programs usually have a much less responsive GUI, by the way…). I don't mind lower performance, I want a quiet workplace and responsive programs.

    The other thing that drives me crazy is programs running in the background and using the hard disk intensively, because even at idle priority they place a real burden on foreground processes… I've yet to see an operating system that has a responsive GUI all the time and handles background tasks properly.

    So while in an ideal world 100% CPU might be the most efficient way to go, in the real world it simply s*cks.

  68. matushorvath says:

    Hmm, I can imagine at least one reason why running the processor at 100% should be avoided if possible, even if there is no other process that would use the CPU time. Your reasoning seems to be based on the assumption that you can use 100% CPU time for the same price as 50% CPU time, so why not use it all. But there is a price to pay: if you run a background task with 100% CPU utilization, you will run the processor pretty hot and the fans pretty noisy. If you can do the same work with less CPU usage (e.g. polling less often), it is definitely worth the effort.

  69. Engywuck says:

    Polling is an I/O problem (primarily), but of course only polling when necessary is good practice. The problem is: how would you slow down a task where the CPU is the bottleneck? Say calculating hashes over large data already in memory, transcoding movies, whatever. Now you have to add some sleep(100) calls only to slow down to lower-than-100%-CPU levels.

    Yes, there are tasks that can be done in far fewer CPU cycles (not recalculating everything 200 times, polling, …) but if you are truly CPU bottlenecked that's quite hard to do. Or said otherwise: when a task uses less than 100% CPU over a long period of time it has some other bottleneck, mostly I/O, especially network and/or hard disk.

  70. dave says:

    IIRC these "RAM cleaners" work on two premises:

    I once wrote a "RAM cleaner" for satirical purposes: all it did was purge the working set of all processes.  My intent was to expose RAM cleaners as snake-oil salesmen (this was less than one hour's programming) but a few people seemed to think I had written a useful program.

  71. rgove says:

    Raymond: Ah, it had been so long that I was assuming that it never would be fixed. I defer to your optimism.

  72. Joe says:

    "I have a desktop machine which doesn't mind 100% spikes of a few seconds, or even a few minutes, but an hour or so (such as when transcoding video) will ultimately cause it to suspend to cool down from overheating. Sure, I could put better cooling in the system, or underclock the CPU (it's overclocked), but then I'd suffer from either excessive noise during CPU spikes, or from a slower CPU during CPU spikes."

    Joke? If not, then you have failed to *successfully* overclock your PC – you've just created an unreliable mess.

  73. James Schend says:

    @Barry Kelly, @Gabor Kulcsar:

    Surely you can't expect software developers to specifically code assuming all users have broken hardware? (You've both described hardware that's broken, at least to my definition of "broken".) Fix your computer, then if you still have the problem, you're allowed to complain to the developers.

  74. Neil (SM) says:

    @Different Alex:

    I think that explanation only holds up when the process runs for less than a second. Otherwise it's back to being inefficient.

  75. DWalker says:

    Slightly off-topic, but since others brought this up…

    A lot of people seem to have fan noise issues.  All of my desktop systems are built to run quietly.  Some of them have a fan resistor to slow down the CPU fan, and another fan resistor to slow down the large, rear-facing exhaust fan (80 or 120mm).  With correct use of good-quality silver-based thermal paste (and not too much of it), the CPU stays comfortably cool.  A couple of my systems have larger, slower-moving (and quieter) CPU fans, with 80-to-70 or 80-to-60 mm fan reducer adapters (basically, plastic shrouds).  I have not resorted to heatpipes or huge, heavy CPU heatsinks or liquid cooling, although I sometimes replace the stock heatsink with a slightly better heatsink.   :-)

    And speaking of SSDs, they are terrific.  I hope they come down in price a little.  I just bought one, and it's great.  I accidentally bought the "value" model when I meant to buy the "mainstream" model, but it still made a huge difference.  The next one I buy will be the M model.

    And don't be afraid to leave the paging file on the SSD.  Supposedly the paging file is 90% reads and 10% writes, more or less, so it's a terrific candidate to place on an SSD.  Music and videos, on the other hand, can go on the spinning disk drive.  And, you can put a laptop disk drive into a desktop case for more quiet goodness, to store all those videos!

  76. f0dder says:

    @DWalker59: silver thermal paste? One of the reputable sites (iirc it was Tom's or Anand) did a test a while ago; the temperature differences between pastes were something in the range of 2C – not worth the premium. It had the extra added benefit of frying a few of the non-shroud AMD CPUs as well, when applied incorrectly :)

    Fan noise is mostly a problem in laptops anyway, where it's a bit hard adding proper 120mm fans. For long-running tasks, I'd personally rather have the task throttle a bit rather than listening to the periodic-but-definitely-annoying jet exhaust spinning up… and to other commenters: no, the OS power options don't help diddly squat against that.

    As for pagefile on an SSD… if you can afford an SSD, you should be able to afford throwing enough RAM in your system that you can disable it entirely :)

  77. DaveWill says:

    Users are fixated on the CPU because it is the only way for them to see that "something" is going on.  When they are not doing anything, they expect the CPU not to be doing anything either.  Until what the CPU is doing can be conveyed transparently to the end user, this conflict of concern will continue.

  78. Different Alex says:

    Yes, definitely broken hardware, if fan noise is a problem. In a desktop it is simply inexcusable. My CPU fan isn't even throttling any more (I replaced that one, as the constant off/slow/fast cycles actually made the most noise). I have a simple, low-noise Papst fan that I screwed onto the heat sink that came with the throttling fan I had bought earlier.

    As for laptops: yes, you can't put that 120mm fan in there (and actually the desktop's fan is only 80mm …) but you can still make low-noise laptops. My Samsung R70 is really great. Under normal circumstances, no noise at all; when the CPU gets used for longer periods of time, I am usually doing something that makes noise anyway, say playing a video or a game; and when I am done and the fan needs to push out some residual heat, the Samsung guys apparently did some great sound engineering in that the noise (which is noticeable) doesn't have a bad pitch or anything like that.

  79. Slapout says:

    The programs that seem to slow down my computer aren't the ones with high CPU usage, they are the ones with high disk access.

  80. Ben says:

    Obviously 100% utilisation can be bad in certain situations; specifically anything with a generally real-time function. The example in the article is relevant to this, if my media player is using 100% CPU it is an indication that it can't keep up and frames and whatnot are being dropped. Worse is the system which idles along at something acceptable (like 70%), but can be perturbed by a transient condition into using 100% CPU, thereby failing timeliness constraints on the real time base load. In such a situation it is better to slow down the processing of the transient process to ensure it does not choke the system.

  81. Ben says:

    What would be cool is if on your server you could assign a maximum process % to your various server processes, thereby allowing divvying up of the CPU pie. This way one service getting overloaded would not kill the performance of others. I suppose this is already done with virtualisation and blade servers and whatnot. Is there already a way to do this under Windows?

  82. Worf says:

    @Andy C: DVFS works at a lower level than applications. All DVFS can do is see what current load is, what historical load is, and guess what future load is going to be and prepare for it.

    Applications know their load profile – they know they're going to process an MP3 for the length of that MP3, so it's either decode and idle, or slow down the decode by idling smartly so the DVFS doesn't have to handle a spike. Spiking CPU loads can cause the DVFS to artificially keep processor power up (it costs power to switch, so minimizing switching is a good thing), wasting power when the CPU is idle.

    If the app is going to crunch data for a few seconds and then go idle for a while (e.g., web page display), then the browser knows it should render it at once, then pre-render the scroll regions in anticipation of their use (while the CPU is at peak speed), then slow down and try to minimize utilization to keep the load low so DVFS won't switch.

    Trying to keep utilization sane while under various loads drove me nuts – if we filtered for spikes during audio playback, we got skipping video because the CPU didn't ramp up fast enough. And at the CPU utilization level, you can't determine the application in use in order to choose the right profile.

  83. Magnum says:

    Had this very issue come up the other day.  I'm working on improving response times on an old system by multi-threading it, and I said in a process meeting I was making good progress because my threads were using close to 100% CPU.

    Not one, but two managers piped up to say "No, we can't have that," and despite my repeated explanations of why, yes, we do want that, they insisted CPUs shouldn't run at full capacity for some reason.

    I had to pick my bottom jaw up off the floor.

  84. Evan says:

    @f0dder: "no, the OS power options don't help diddly squat against that."

    What setting are you changing, because it helps for me.

    @Different Alex: "(and actually the desktop's fan is only 80mm …)"

    Maybe yours are, but I have four fans in my desktop (I think) and all four are 120mm.

  85. Kevin Provance says:

    Task Manager?  LOL!  WTF is that?  Most evolved Windows users (programmers) use Process Explorer.  Anything else is useless.

  86. Different Alex says:

    @Evan: The 80mm reference was supposed to show that even with just 80mm vs 120mm, you can get a noise free desktop. And just so you know, the big case fan in the back is 120mm :)

    @Ben No, it is not a good idea to artificially slow down your transient process by programming it that way. That, as the article states, is what priorities are for, and of course you can use them from your program: just set your priority to something lower. If nothing else on the system wants the CPU, your process can still take 100% of the CPU power (if it can even saturate it; as the comments have shown at length, most desktop processes are somehow I/O bound anyway). I do that with video encoding jobs on my server under Linux using the nice utility. The encoding runs at 100% CPU usage when nothing else wants the CPU, but since its priority is very low, nothing else on the system suffers if it wants the CPU (like, say, video playback). And if nothing else is using the CPU, I don't waste half of it. The same goes for disk I/O (ionice).

    Assigning a maximum CPU usage percentage is just the same BS and not cool. Just set priorities.
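    The encoding setup described above is a one-liner on Linux; a sketch (the ffmpeg command is illustrative, substitute any CPU-heavy job):

    ```shell
    # Lowest CPU priority (nice 19) plus idle I/O scheduling class (ionice -c 3):
    # the encode soaks up otherwise-idle cycles at 100% CPU, but yields
    # immediately whenever anything else wants the CPU or the disk.
    #
    #   nice -n 19 ionice -c 3 ffmpeg -i input.mkv output.mp4
    #
    # The same wrappers work for any command:
    nice -n 19 sh -c 'echo "running at idle priority"'
    ```

    The point is that the cap is on *priority*, not on utilization: the job still uses 100% of whatever CPU is going spare.
    
    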

  87. Ben says:

    @Diff Alex

    I'm not sure you understand my point. First of all, I am not referring to "desktop processes", but rather server processes. More specifically, server processes that have a real-time component and thus "fail" if timeliness constraints are not met. These are not, I suppose, the most common problems to be concerned with, but they are issues that I must contend with, and are relevant to the topic. With those assumptions in mind:

    One of the issues I face in coding components for my company's application, which must contend with large (enormous) I/O loads that are CPU bound, is how to deal with peak loads in avalanche conditions. In the environment this software is used in, base loads and peak loads are predictable and model-able, and it is useful for the customer to be able to tune the priorities based on the type of data. This allows continued timeliness of the base load, at the sacrifice of slower processing of data that does not have the same constraints (or has less stringent constraints). In this case thread priorities are insufficient, as the control goes to the O/S and the user cannot predict the behaviour. So in this context I think you can see why "artificially" slowing down the processing of transient data can be a good idea.

    To my second point: one of the issues I must contend with is that there is a strong incentive to run multiple server processes on an individual box. This is due to the fact that it significantly reduces overall cost (due to licenses). In this scenario it is important that one process that is being overwhelmed does not affect the performance of another. This is currently achieved by binding these processes to individual cores to ensure that their CPU usage is capped. Again, giving control back to the O/S removes the user's control in this context, and makes it impossible to provide performance guarantees on time-critical data. So I thought it would be nice to be able to divide up the CPU in a more fine-grained way than by core.

  88. DWalker59 says:

    f0dder: No, I didn't see that price comparison that said that cheap thermal paste vs. silver thermal paste would only result in 2 degrees C of difference.  For me, it has resulted in 5-10 degrees C of difference, even when I did not replace the heatsink.  YMMV.

    And I don't know about your price comparisons, but you could be right.  A large SSD costs a lot of money, and so does more than 4GB of DDR3 memory.  I have more than one machine that can only accept 4 GB of memory, so the 60GB value SSD was cheaper than a new motherboard, CPU, and new DDR3 memory!

    What bothers me, and I'll bet many other people, is when the CPU fan speeds up, and slows down, and speeds up, and slows down, etc.  I turned off the automatic CPU fan speed control in several motherboards, and simply reduced the fan speed (after using silver thermal paste).  That changing speed noise is much more noticeable than a constant fan speed (I think).

  89. Different Alex says:

    That does explain some things, thanks :) May I ask which operating system this would be on?

    If loads are predictable and model-able, maybe you should use an O/S where you can actually specify these things to the O/S, so that scheduling algorithms can take them into account. Redoing an O/S's work in every application is however not something I think should be done. If you need guarantees, use a real time O/S … Also it feels like "This allows continued timeliness of the base load, at the sacrifice of slower processing of data that does not have the same constraints" is kinda what I was describing with the background video conversion vs. foreground video playing. The video conversion has less stringent constraints. It should finish reasonably fast but it certainly shouldn't keep the video playback starving. So, you give video playback a high priority (higher than normal tasks, except for kernel tasks) and encoding is niced, so that it yields to any other process. That pretty much describes it I think.

    Can you elaborate on what "CPU bound I/O load" is? Is it I/O bound or CPU bound? I don't get it.

    OK, so you care about one process not starving another process. That, to me, sounds like what scheduling algorithms are all about and that is precisely what the job of an O/S is and what a lot of operating system research has gone into over the last 50 years. Why would you make every single application programmer responsible for this, when there are very bright people writing operating systems to tackle exactly this problem for you? If you need guarantees, use an operating system that can give you a guarantee, but don't be surprised if those guarantees mean that you have to sacrifice some raw performance (see soft vs. hard real time O/S).

  90. Ben says:

    @Diff Alex

    In answer to your questions. We use Windows operating systems. Yes, in essence this is a hard real-time system (or firm, if you prefer), attempting to use a non-real-time O/S (performance requirements are confirmed by simulation during commissioning, or by having hardware that greatly exceeds the base load requirements). You may be surprised at how frequently this occurs. The whys and wherefores would take many paragraphs to explain, but suffice to say that no competitor in our industry actually uses a real-time O/S to my knowledge. Over 90% of installations are on Windows servers. Having got that out of the way, the load is a large I/O load that is CPU bound long before hitting I/O limitations (due to the amount of work that must be performed on each data piece). I agree that tweaking the scheduling is precisely what an application should not do in a normal situation. In this instance though, I am referring to an application balancing its own load (for which it does indeed have an internal scheduler that provides hard real-time allocation of CPU within the context of the process). This is about trying to get the nice predictable behaviour of a real-time system. And yes, you are correct in saying that some performance may be sacrificed in pursuit of this goal (we spend a great deal of time trying to optimise this, however).

    So assuming that a 20 year legacy code base cannot be adapted to a real-time OS in any sane business case, pragmatic solutions are required. From this point of view, I would love to be able to tell the O/S, "balance the CPU load as you see fit, but honour these conditions". However, the current system is fine, cores are cheap and getting cheaper; and if the CPU is mostly idle in one or two of them, our customers could not care a whit.

  91. Ben says:

    More generally, I'm not a network admin, but I imagine they face similar issues. Can you run a web server and an exchange server on the same box? What happens when the web server comes under too heavy a load, do you lose access to email as well? I imagine network admins work around this with multiple boxes or virtualisation, but I maintain that it would be great to be able to achieve the same effect by being able to put caps on resource usage. Primarily as this would save a good deal of licensing outlay.

  92. Gabe says:

    Different Alex: There are plenty of good reasons to want to throttle CPU usage instead of using priorities. For one thing, process priorities only tell the OS that one class of processes should not run if another class (those with higher priorities) wants to run.

    Let's say I want to run SETI@home and Folding@home, but I think curing cancer is more important than finding aliens, so I want SAH to use just 20% of the CPU and have FAH use the rest. I can't just set SAH to a lower priority because FAH will never (well, rarely) give it a chance to execute.

    Another example is web pages. I want to leave a web page open all the time, but the ads on the page consume vast quantities of CPU. Lowering the priority of the process will make it impossible to read my webmail while I'm compiling, yet it will still make my fan run and drain my laptop's battery. If I could set the web browser to use only, say, 10% of the CPU, I could still use it as I want to without it noticeably slowing down anything else and without it draining my battery.
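    In the absence of an OS-level cap, the usual workaround is duty-cycle throttling: alternate short bursts of work with sleeps so the average utilization stays near a target (third-party "CPU limiter" tools do roughly this by suspending and resuming the target process). A hypothetical Python sketch of the idea:

    ```python
    import time

    def run_throttled(work_step, cpu_fraction, total_seconds):
        """Call work_step repeatedly, capping average CPU use near
        cpu_fraction by interleaving bursts of work with sleeps.

        Crude, but it approximates the "use only 10% of the CPU"
        behavior wished for above without any OS support. Returns
        the number of work steps completed.
        """
        slice_s = 0.05  # duty-cycle period
        steps = 0
        deadline = time.monotonic() + total_seconds
        while time.monotonic() < deadline:
            # Work for cpu_fraction of the slice...
            burst_end = time.monotonic() + slice_s * cpu_fraction
            while time.monotonic() < burst_end:
                work_step()
                steps += 1
            # ...then sleep for the remainder, releasing the CPU.
            time.sleep(slice_s * (1.0 - cpu_fraction))
        return steps

    # e.g. run_throttled(render_ads_step, cpu_fraction=0.1, total_seconds=5)
    ```

    Note this illustrates exactly the trade-off Raymond criticizes for foreground work: the job takes proportionally longer. It only makes sense for tasks you genuinely want to keep in the background indefinitely.
    
    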

  93. Different Alex says:

    @Ben Thanks for those explanations! If all you are doing is scheduling your own application, well yeah. But don't run anything other than your software on that system or everything goes out the window (see other stories by Raymond, like "What if more than one program does that" :)

    As for admins. I was one for a smaller company and worked closely with them in a larger one and yes, that's what they did. Smaller company, it didn't matter, load wasn't that high anyway, large company, dedicated CPU resources etc. If you are large enough that you need the resources, you need the resources. Sharing them only makes sense, if you are just idling anyway. Oh and man LPAR (i.e. admins have done this for forever), and man ulimit (at least on real UNIXes) :)

    @Gabe: As for SETI, well, bad example in my book :) For web pages, get NoScript and ads won't bother you any longer. If your web browser were only allowed to use 10% of the CPU, your webmail would still be slow, as the ads would suck up all of those 10% …

  94. John says:

    I'm sure Task Manager is making calculated guesses most of the time; mathematically, at 100% CPU there is nothing left over to calculate the CPU load with. Hence the best form of measurement is the temperature!

  95. Gabe says:

    Different Alex: Are you aware of what ulimit does? It kills your process once it's used so many seconds of CPU time! That's completely unrelated to what Ben is asking for.

    All you're doing is coming up with hack after hack to make up for the fact that there's no simple way to just tell the OS not to let a process use too many resources at once. Using LPARs has all the administrative overhead of having separate computers; not running Folding@home means I don't get to help cure cancer; turning off scripts in my webmail means my webmail doesn't work either.

  96. Different Alex says:

    @Gabe: LPAR and ulimit were meant for Ben, LPAR especially in regards to his virtualization etc. comments, yes. Also, how is an RTOS a hack if your job requires an RTOS to be executed correctly? If you don't want to do that and want to use Windows, stop complaining.

    As for you, I am aware of the problem with your two *@homes, but I didn't like the examples. However, the browser thing: not true. I never told you to turn off scripts, but to use noscript! That's a Firefox plugin where you can selectively disable and enable certain domains' scripts. Works like a charm. I almost never see any ads, as most ads are just loaded/displayed using javascript from easily blockable domains. E.g., who would want to execute ANY script from doubleclick.net? So you just disable doubleclick.net from executing scripts. However, hotmail.com is not disabled and you get to use your webmail… As I said, it only catches almost every ad, but I can live with that.

  97. Different Alex says:

    Oh, and I forgot: use the RequestPolicy plugin together with that, so you can selectively allow/disallow cross-domain script execution/calling.

  98. Gabe says:

    Different Alex: Switching to a different OS is a hack because the scheduling feature he wants is only *one* of the features that his service needs. Who's to say that there exists an OS with all the features he needs? If Windows has every feature he needs except one, what's wrong with asking for that one feature?

    And I don't understand the ulimit comment. Ben was talking about putting a web server and a mail server on the same box. Where does ulimit come in? It's not like you would want to kill the web server after it's taken up too much CPU! All he wants to do is make sure a heavily loaded web server doesn't bog down the mail server and vice versa. It seems like this sort of thing ought to be trivial to do without the overhead of virtualization.

    What don't you like about my *@home examples? Millions of people run such software and lots of them wish they could do so without using lots of energy. Face it, CPU cycles aren't free anymore. When you use the CPU, you are using power and generating heat. In laptops this shortens battery life, causes fan noise, and can make computers too uncomfortable to hold. In data centers it drives up electricity and cooling costs.

    As for the webmail example, noscript is not a general solution. What if the ads are served from the same domain as the mail scripts? What if the site detects that ads are blocked and refuses to give me mail until I unblock them? What if I'm required to use IE6? What if the slowdown is in the webmail scripts themselves rather than the ads?

    Another (real-life) example is this Flash game I like to play. It uses 100% of a CPU no matter whether it's idle (paused) or in the middle of a level, no matter how fast my CPU is. Why? I don't know. All I know is that I can only play it when my laptop is plugged in, or it drains the battery in an hour or two. If I could tell the OS to give it fewer or shorter timeslices, the game would still be playable (it works just fine on slow netbooks), just without using nearly so much power.

Comments are closed.
