Productivity — but to what extreme?


By far the most important feature of the CLR (and WinFX and managed code generally) is developer productivity. We want developers to be able to build better applications faster on the managed platform than anywhere else. As a side note, I once saw a presentation that demonstrated how this principle actually helped “solve world hunger” by letting developers get projects done faster and spend the extra time on social issues… Now, this does not mean that things like security, web services, and performance are not important (they certainly are), but productivity is our reason for being.

Now, I am not sure we are exactly solving world hunger, but I do think we are letting developers focus on their core business problem while they leave the plumbing to the platform. As much as I’d love to say this is all because of the greatness of the CLR or the consistency of the API set, it also has a ton to do with the quality of the tool support we get with Visual Studio.

I think that is all motherhood and apple pie, right? (I’d love to hear your feedback.) So where am I going with this?

Well, today I had a heated discussion with some of the smartest people on the CLR team about the balance between developer productivity and performance. The debate goes something like this:


            Me: Feature X will make developers more productive, so we should do it!


            Other Guy: Feature X will make developers’ apps so slow it will not matter; they will use something else…

Now, both positions are a little extreme, but there is a lot of room in between… Where should we fall in that gray area?  When I think of environments such as VBRun, they clearly prioritized developer productivity above performance (although VBRun perf was actually pretty good). Other environments, such as the C runtime library, clearly prioritized performance ahead of productivity.

So, what advice would you give my team? Say you had $100 to spend abstractly on “more productivity features” or “better performance”, where would you spend your money? And if you have any hard-nosed folks who still eat and drink unmanaged (Win32) code all day long, I’d love to hear from them as well.

Thanks!

Comments (38)

  1. Productivity! says:

    I can buy more hardware, I can’t buy more hours in the day.

  2. Erik says:

    Productivity… Upgrading machines is cheap compared to hiring new people and training them to get them up to speed just so that they can help implement the backlog of features we need in the next version of our app because we spend too long with verbose code to do what we need instead of more time chasing the random bugs and quirks that always seem to pop up. (Yes that was spit out as one big sentence. :-p)

  3. Keith Hill says:

    I’ve always liked a tiered approach. That is, some underlying feature is fast but the way to access it and get all that performance may not be particularly easy. Then you build some sort of easy access layer on top of it that makes the feature easier to use but you take a perf hit. That way, everybody is happy.

    Now it may turn out in your case this layered approach won’t work. If so, it’s really hard to say because first of all my performance goals aren’t likely to be the same as someone else’s. What your opponent believes is a perf issue might not matter at all to me but it might be critical to someone else. Obviously MS believes that garbage collection was worth the perf hit in order to make development easier (and more reliable). But at some point in the past, the perf hit was bad enough to make that solution not commercially viable.

    I really seem to be dancing around on this reply. :-( OK, to answer your question based on the assumed truth of the statement "so slow it will not matter", I’d put $80 on making it fast now, with $20 to make it as easy to use as makes sense. Later on you can think about a simpler API, if needed, to improve developer productivity, kind of like VB’s My classes concept. BTW, for other scenarios where perf isn’t such a critical issue I switch that to $70 dev productivity and $30 perf.
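
    To make the tiered idea above concrete, here is a minimal C# sketch (all of the names are hypothetical, not a real API): a low-level type where the caller owns and reuses the buffer for speed, plus an easy layer on top that trades an allocation per call for one-line usage.

        using System;
        using System.IO;

        // Fast tier: the caller supplies and reuses the buffer,
        // so there are no per-call allocations.
        public sealed class RecordReader
        {
            private readonly Stream stream;
            public RecordReader(Stream stream) { this.stream = stream; }

            public int ReadRecord(byte[] buffer, int offset, int count)
            {
                return stream.Read(buffer, offset, count);
            }
        }

        // Easy tier: one call per record, at the cost of a new array each time.
        public static class EasyRecords
        {
            public static byte[] ReadRecord(Stream stream, int count)
            {
                byte[] buffer = new byte[count];
                int read = stream.Read(buffer, 0, count);
                if (read == count) return buffer;

                // Short read: return an array of exactly the bytes we got.
                byte[] trimmed = new byte[read];
                Array.Copy(buffer, trimmed, read);
                return trimmed;
            }
        }

    Because the easy tier is built entirely on the fast one, anyone who later hits the performance bar can drop down a level without rewriting their program.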

  4. Lonnie McCullough says:

    I was a C++ junkie. I loved the power and control I had over the machine and didn’t mind the responsibility it forced upon me to manage my own resources. I figured if I allocated it, I’d take care of it. I’ve been working almost exclusively in managed code (C#) for the past month or so and I have to say that I am amazed by the elegance of the platform and the simplicity of doing almost anything. I had to write an ActiveX control in C++ earlier this week and the experience was a little jarring. I never realized I could be this productive, and now I have more time to make love to my girlfriend and eat ice cream! C# Rocks!

  5. The answer to your question is glaringly obvious. The ongoing increases in hardware speed mean that the people on your team worrying about nebulous performance issues simply aren’t paying attention. Developer time is vastly more expensive than better hardware. I’d be willing to bet that the people who claim that the productivity-adding features are "too expensive" have never actually attempted to profile them and find out…

  6. Pavel Lebedinsky says:

    > Say you had $100 to spend abstractly on “more productivity features” or “better performance” where would you spend your money?

    How about better reliability and code quality?

    Productivity is already higher than it should be in my opinion. Instead of a feature that allows people to write shitty code 50% faster I would rather see a feature that makes it 50% more difficult to write such code (integrate FxCop and AppVerifier with VS, add more debug checks, fix or deprecate dangerous APIs, etc).

  7. Machines are a finite resource. While we spend all of our money buying new machines, we could just as easily have implemented the same algorithms on our old machines with a focus on performance. It is always sad to see a one-year-old machine get kicked to the side because it was perceived to be too slow to complete a crucial computation.

    What we don’t realize is that we are at a crucial juncture where we are enabling people through computing who can’t buy a new machine every six months. We are still making promises to have broadband in every home and a laptop in the hands of every grade school student. These are individuals who need to run our software on old machines, cheap machines, hand-me-downs if you will. We need to focus on performance to enable broader adoption.

    People still have PIIs out there and old Athlon boxes, and they get online every day and run apps from all over the place. You start throwing .NET apps on those same machines, which operate completely fine without them, and you find the .NET apps run slow and sluggish comparatively. That just isn’t a good way to introduce your new platform, no matter how bound you are to disk-based IO, and no matter how well you are tuned to operate with 256 megs of RAM or more.

  8. How would I spend the money? Split it down the middle at 50/50. If I don’t need to go out and buy new machines all the time, I’ll have more money in my pocket to spend on your software, so you can hire more devs and have better overall feature implementation. I don’t care how you split 100 dollars when you could be spending 200 dollars instead, had you not forced me to buy a new machine.

  9. Martin Taillefer says:

    I think the issue is one of superficial productivity gains.

    If an interface is easy to use, folks will use it and move on with life. Later, when it comes time to "get real" with their product, their QA department may complain that the software isn’t clearing the performance bar. The developer then faces the difficult and expensive task of poring over code to discover what’s slow and understand why. To me this is a potentially disastrous loss of productivity, especially if the interface in question is large and the developer has built a large amount of functionality around it.

    Claiming that hardware resources are cheap and developer time is expensive is not living in the real world. For most commercial non-proprietary scenarios, software performance remains critical.

  10. Keith Patrick says:

    It’s managed code, so we can expect performance not to be optimal. There are two things I primarily want to achieve by targeting .NET (well, outside gainful employment): 1) writing code quickly, leveraging a robust framework, and 2) writing code that itself is robust. The only performance gains I am interested in are those I get by leveraging the existing framework (I assume MS can write better-performing code than I can) and by high-level algorithm optimizations. I don’t care about unrolling loops or low-level stuff like that (in fact, I want my code simple and easy to maintain); it’s a tradeoff I expect from going to *managed* code. Besides, the way I approach low-level perf is this: even if I could spend two weeks tweaking my memory usage (one caveat: I try to write my code in the manner suggested by MS design guides, so that I get decent-to-optimal usage by "playing by the rules"), it doesn’t matter, because I doubt the gains would be that great, and hardware can catch up pretty quickly.

  11. Alex Kazovic says:

    Many years ago, when I first started programming, I used assembler. Then I moved on to C, then C++, then VB. I made these moves for productivity, although I lost a certain amount of control/speed.

    Currently there are very few people who use assembler. Therefore, the majority of people have, to one extent or another, made that compromise. This leads to two points:

    1. Your question is too coarsely grained. It depends on the productivity gain and the amount of speed lost.

    2. It also depends on the point in time at which the question is asked. The compromise might not be appropriate now, but at some point in the future it might be. As technology improves there is a natural bias in favour of productivity. If one is designing something that will last a long time, it is important that this bias is borne in mind and thought is given to when the crossover (in favour of productivity) is likely to occur for most people.

    Alex Kazovic

  12. Although it might be easy to conclude that productivity should always be the first priority, because you can buy faster hardware anyway and for less than developer time costs, that’s a bit too simplistic.

    First of all, for performance to really matter we’re talking about client/server scenarios (performance in desktop apps is unimportant; or rather, let the game developers write assembler to differentiate themselves!).

    On the server, it’s not always true that you can just keep buying more hardware to speed things up. At a certain point you’ve hit the top, and to increase speed from there on you need to get into clustering and other complicated things, which in turn requires people to set it up, maintain it, or even adjust software to run correctly on it. And judging by how slow websites are becoming and how internet usage is still growing, I’d say that’s more and more turning into a major issue.

    So for typical desktop applications, $100 on productivity. For server-side applications, $50/50. We need productivity there as well, because without it, we won’t have anything worthwhile to run quickly anytime soon anyway.

  13. Rod widdowson says:

    I’m with Pavel on this.

    In many (but not all) situations, developers could afford to be less productive but be forced to write more sustainable code.

  14. Merak says:

    As is the case in many situations, the answer from my POV is "It depends".

    It often depends on the application you are writing.

    To cover all cases, I’d prefer to have access to the lower levels of an API to give me more control over perf (and access to the less commonly used, subtle features), but still have an encompassing API I can call upon to do the bread and butter stuff for the productivity gains.

    However, I would also like the encompassing API to either:

    – be part of the API it uses (e.g. quick file access routines should be in System.IO along with the raw access to Streams, etc.)

    AND/OR

    – the source to the high-level API should be supplied (even if only in documentation form) for those cases where the high productivity gain cannot be realised due to slight differences between its function and my specific requirement.

    IMHO, having access to the source of the high-level function would allow for much higher productivity gains even in those situations where the "out of the box" version is insufficient. We could either use the source as a basis for the new, more specific function we required, or simply as an example (which we know works).
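
    System.IO already has roughly the two-level shape Merak describes. Here is a small sketch contrasting the levels (File.ReadAllText is the .NET 2.0 convenience method; the stream-based route gives explicit control):

        using System.IO;
        using System.Text;

        class TwoTiers
        {
            static void Main()
            {
                // High-level tier: one call; the framework picks the
                // buffer size and detects the encoding.
                string easy = File.ReadAllText("data.txt");

                // Low-level tier: explicit control over sharing,
                // buffer size, and encoding.
                using (FileStream fs = new FileStream("data.txt", FileMode.Open,
                    FileAccess.Read, FileShare.Read, 4096))
                using (StreamReader reader = new StreamReader(fs, Encoding.UTF8))
                {
                    string controlled = reader.ReadToEnd();
                }
            }
        }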

  15. Andrew Tweddle says:

    Productivity = lower cost of development

    Performance = higher usability benefit to the end user

    ASP.Net = lower deployment cost to the user

    Windows Forms = much higher usability benefit to the user

    Cost vs benefit: Cost is easy to measure, benefit is very hard to measure.

    And in corporate environments, the decision-maker is seldom the end-user.

    And the decision-maker places much higher emphasis on quantifiable factors, not fuzzy factors.

    So, in this context, cost affects the sale, but usability doesn’t. So you should optimise for productivity rather than performance.

    By us developers selling more software, MS makes more money to plow back into better performance in the future (e.g. when investment in productivity becomes marginal, and/or hardware advances slow down).

    Also: time spent on better performance is often not an investment since hardware advances soon provide the same speed benefits "for free".

  16. David Levine says:

    Performance is a feature too.

    As others have pointed out, there is a trade-off between writing performant code and being productive. The proper mix depends almost entirely on the type of software being written. The further down the software food chain you go (from client applications toward servers, libraries, device drivers, operating systems, etc.), the greater the need for performance. If you’re calling a method 1,000,000 times in a tight loop, that method had better be performant. As the performance of the low-level code increases, the need for the high-level code to be performant decreases, allowing the developer to focus on being productive rather than on performance.

    Another way of thinking of this is in terms of apparent productivity. If the software being developed is intended to be used by many other developers (e.g. libraries such as the BCL), then that developer should spend more time on performance, even if it appears the developer is not as productive as the next codeslinger, because it allows everyone using the code to be more productive. It’s a productivity multiplier.

    C# is targeted at apps fairly high up the software food chain and IMHO is focused more on productivity than performance, and rightfully so. There is a point where performance becomes an overriding issue, but for C# I think productivity is usually more important than performance. I wouldn’t want to write a device driver in C# (actually I would, but C# isn’t ready for it; one day…) but I wouldn’t want to write a .NET rich client in C either.

    Bottom line – it depends on what type of software is being developed.

  17. Jerry Pisk says:

    You can’t always buy better hardware. If you deal with hundreds of millions of records (terabytes of data), there just isn’t any hardware available to handle it. You have to code for performance, or at least for scalability; your code has to be written so it can run distributed. Easy code ("written" using your mouse) just doesn’t cut it here. The backstage articles on Microsoft’s website offer some insight: start with simple, mouse- and wizard-generated code, but then tweak it so it can actually be used in a production environment.

    As for my personal value ladder:

    1. Readability

    2. Performance

    3. Productivity

    Most of the time the productive code will have to be tweaked anyway, so all the productivity gains go down the drain, especially when the changes have to be made by a different developer, who knows little more than how to drag a database connection onto a form.

  18. As far as .NET changing the world, you’ll likely be inspired by the NxOpinion application. For a quick ramp-up, check out the 4 1/2 minute video case study linked to in the upper-right corner of the Microsoft PressPass story: http://www.microsoft.com/presspass/features/2004/Jan04/01-21NxOpinion.asp

  19. Productivity!

    In 18-24 months, on state-of-the-art hardware, my app’ll be twice as fast anyway.

    I’m going to get more performance gains from adjusting my programming habits than I’ll get from any "tweak". And if the programming environment lets me change habits without having to kick dead whales down the beach, then I’ll be more likely to make drastic and necessary changes.

  20. I don’t believe the developer. There is no trade-off between productivity and performance, in reality. I have never seen a single situation where a true trade-off exists.

    Developers who say there is a trade-off are thinking about a fundamental design flaw that produces the perceived "trade-off". The problem is the design flaw; perhaps it goes so deep that it will take enormous effort to eliminate it. But if a design produces a trade-off between productivity and performance, it is, by definition, flawed. Fix the flaw, and the trade-off goes away: you can have both productivity and performance.

    Post the "trade-off" and let us dissect it!

  21. J. Daniel Smith says:

    I’m with Pavel too.

    For most applications, the current productivity/performance mix in .NET is sufficient; that’s not to say there isn’t room for improvement.

    But being able to crank out hundreds of lines of poor C# (or VB.NET) code doesn’t help productivity over the entire life of an application (through multiple revisions, teams, companies, etc.). Make it possible to enforce/check all the various rules/guidelines/styles/etc. that people write about. And I’m not talking about (relatively) superficial things like spelling/casing or indentation.

    Things like the refactoring tools that are part of Visual Studio 2005 are a step in this direction; the biggest gain to overall productivity is increased focus in areas like this. Don’t let the developer write crappy code.

  22. Erik Sargent says:

    I’ll side with those on the "Productivity means writing GOOD code, not just the number of lines."

    Brad, your comment, "Now, this does not mean that things like security, web services, and performance are not important (they certainly are), but productivity is our reason for being." is very revealing about the bias at Microsoft about what "productivity" means. Writing lots of insecure code isn’t productive. Writing lots of code that isn’t maintainable because it is poorly written isn’t productive in the long run. Productivity is not lines of code.

    I’m very excited about what I’ve seen in VS2005 in terms of productivity. And I generally do agree that productivity is more important than performance. How many times have I spoken at developer events and gotten questions about how Windows and/or SQL Server and/or ASP.NET performs? I never answer the question; I just ask how many of them are currently working on a project where, ignoring security and server administration and focusing just on performance, a single dual-proc server with a couple of gigs of RAM couldn’t handle the load. I load balance all my servers just so I can do application upgrades without downtime, but none of my apps need to be load balanced to handle the load. Very few people can say otherwise.

    We do have several 3rd party ISV apps that have to be load balanced, but that is because they were written with memory leaks and lose track of database connections, not because of the amount of traffic.

    Well, that pretty much answers it for me. We need better code, not more code. And we need better code, not faster code. We need tools like FxCop and NUnit presented in a way that helps developers who aren’t experienced in a disciplined environment learn about the tools and how to use them, and then have VS provide easy but tight integration with those tools. Those of us who use tools like these all the time find them easy enough, but if I hadn’t had someone teaching me NUnit for a couple of days several years ago, I would have given up. FxCop was easy, but it had to be run separately, and until recently you couldn’t make changes to your code while you had it open because it locked the DLL!

  23. Unfortunately there’s no Moore’s law for software development. Therefore, any productivity gains we can get are valuable. I figure you guys are smart enough to keep making the engine run fast under the hood.

    Of course, balance that against any performance hits so bad they make the app unusable. No point in being able to build even MORE unusable apps.

  24. Mike Dunn says:

    I always think about what Rico said in his perf presentation (available on MSDN TV) about how .NET is making formerly complex things appear so bloody simple that devs will just use the simple way without thinking of the ramifications. The key word is _appear_. It’s no simpler now to parse a data file and build a b-tree in memory out of it; the work has just been pushed to the CLR for file/memory management. The code that the dev writes looks simpler ("if it fails, I’ll just null the root, it all falls on the floor, and the GC will clean it up"), but that doesn’t make the resulting code any faster or better.

    Anyway, the point here is that you can’t measure "productivity" with some quick cool-looking stat like KLOC per developer-day. The only development-related stats that matter to the execs are "did we ship on time?" and "are we under budget?". I guess you can improve on those stats (shipping sooner with less cost) by using a .NET language, but again, it’s not a magical solution. You still need smart devs who actually care about writing good-quality code, not hacks who just churn out stuff with no error checking, maybe run it once to make sure it doesn’t crash, then move on to the next thing.

  25. NoSpam@dev.null says:

    Here’re a couple of quotes that I think apply:

    "The ability to simplify means to eliminate the unnecessary so that the necessary may speak."

    -Hans Hofmann, Introduction to the Bootstrap, 1993

    "Things should be made as simple as possible, but not any simpler."

    – Einstein

    Here’re some questions to ask yourself:

    Why did Java displace a lot of COM in the marketplace?

    What’s J2EE’s chief complaint among developers?

    What would .NET adoption look like if it were twice as fast as opposed to half as hard?

  26. eAndy says:

    $85 for productivity

    $10 documenting, with samples, the fact that there are faster ways of doing things

    $5 documenting the concrete decision points for choosing between productive and fast

    There will always be cases where perf completely rules, and a fast path must exist. Educate people about the costs of your productivity hacks. Tell them what the alternatives are, why the productivity gain exists, and what that means for perf.

  27. I’m going to toss in one final comment even though it will get buried. eAndy has pointed out that documentation may be the key to a true reliability story. Proper documentation can lead users to write better code, and to find the code that is most appropriate for what they are trying to do. Crosslinks and crossrefs can confuse some users, but I think the ability to turn on such a powerful feature for the dev who doesn’t mind reading through 5 different options so he can choose the best is indispensable.

    Brian Grunkemeyer recently commented on why some cancellation code I had written was flawed. I knew it was flawed the moment I put it out there; however, it solved a problem that needed to be solved, and so I used it. Now, he mentioned that there were a number of things that could go wrong that made my code bad, and my response was quite clear: 1) tell me what I can look for to minimize my risk, through documentation; 2) fix those things that I couldn’t work around without being the BCL. I highly recommend reading any posts by Brian Grunkemeyer, as he is an extremely intelligent guy and is responsible for quite a bit of the performance under the hood that everyone talks about. However, recognizing through his comments the trade-offs that were made is key to understanding why performance at the developer level is so important, since it exists to augment every location in the BCL where they had to make a trade-off.

    http://weblogs.asp.net/justin_rogers/archive/2004/05/22/139649.aspx

    That forces me to take another $10 from the productivity crew and give it to the UE crew so they can make better documentation. I hate hearing things like "well, we didn’t have enough time to focus on perf for the first version, so we punted it to the second version." Why bother punting it to the second version, when you know that all of the productivity features being added will again put pressure on the ability to focus on performance?

    Stop making that trade-off, as Frank Hileman wrote, and instead find ways to remove the trade-off. If that means providing one less feature to improve 4 others, the improvement on the 4 others is also a feature of the system.

  28. Peter Dhonau says:

    Hard to give a meaningful answer without more information, but in general my opinion is go for performance, with the caveat that caution is advised. Sure, for some developers hardware upgrades are no problem, but there are many who, for reasons of personal finance or corporate IT policy, won’t be able to upgrade so quickly – and we all know how quickly the experience of using performance-hamstrung software palls. It depends on how badly the latter group will be affected, and also on the relative size of the two groups. Annoying 50% of developers is going to cause a brouhaha. Annoying 10% while pleasing most of the others is not.

  29. Antoine de Saint-Exupery says:

    "You know you’ve achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away."

  30. Eric Siegel says:

    I want people to write tools and frameworks spending $100 on performance so that the rest of the world can spend $100 on productivity.

  31. Deep Kocheta says:

    PRODUCTIVITY UP HERE, PERFORMANCE DOWN BELOW!

    Improving developer productivity (DP) should not end up making the infrastructure dog-slow. If there is an easy, less performant way of doing a task, there better be a more difficult and less performant way of doing it too. In the old days, VB and C++ managed this contradiction (almost) perfectly.

    In the infrastructure, I would definitely go for performance because developer productivity can (usually) be improved by language specific features (which may even generate ugly, slow MSIL under the covers), code generators, and libraries.

    Productivity should be a layer on top of performance – not a replacement for it.

    The choice should be left to the application developer, to decide the trade-offs. This principle seems to have been followed so far in the .Net infrastructure – it’s still possible to run unmanaged code by going down a layer.
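
    As a minimal illustration of that escape hatch: P/Invoke lets managed code drop down a layer and call Win32 directly. The declaration below targets the real kernel32 QueryPerformanceCounter (the classic high-resolution timer); the surrounding class is just a made-up example.

        using System;
        using System.Runtime.InteropServices;

        class NativeTimer
        {
            // Going down a layer: call the Win32 high-resolution
            // counter directly rather than a managed wrapper.
            [DllImport("kernel32.dll")]
            private static extern bool QueryPerformanceCounter(out long value);

            static void Main()
            {
                long ticks;
                if (QueryPerformanceCounter(out ticks))
                {
                    Console.WriteLine("High-resolution ticks: {0}", ticks);
                }
            }
        }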

  32. Deep Kocheta says:

    Correction:

    If there is an easy, slow-performing way of doing a task, there had better be a more performant way of doing it too, even though it may be more difficult.

  33. W Poust says:

    Productivity versus performance is a tough question. Thinking about the situation I, as Joe Developer, am normally in (developing with high-level, productive APIs, then suddenly hitting performance issues with one of them) leads me to:

    $15 = productivity

    $15 = performance

    $70 = providing source code/documentation so that I can debug into the high-level API and see why I have the issue

    As it stands right now, the .NET framework is a black box. There is absolutely no way for me to know the ramifications of choosing a particular method over a different overload and doing some of the preliminary work myself.

  34. I don’t envy having to make this determination. What I will say is that for my own part, I’d probably spend the money 50/50.

    My bias: I have worked for a lot of small-to-medium-sized companies for whom time-to-market was every bit as important as overall performance when the product got there.

    If I were approached to trim my $100 budget by 10%, I’d probably (after a lengthy budget battle) take the $10 entirely off the performance side. Though rarely ideal, it’s likely easier to explain to a customer that they need an extra server afterwards than it is to explain why their software will take 6 months longer to deliver.