The Pit of Success


"The Pit of Success: in stark contrast to a summit, a
peak, or a journey across a desert to find victory through many trials and
surprises, we want our customers to simply fall into winning practices by using
our platform and frameworks. To
the extent that we make it easy to get into trouble, we fail."

           Rico Mariani, MS Research MindSwap, Oct 2003


—————–
I had a chance to hear Rico Mariani do
his stump speech on the performance of managed code to an audience of very senior
technical folks (not sure why they let me in)… As a performance architect on the CLR
team, Rico has a ton of passion for how we can change internal CLR details to
make performance (more specifically, working set) better. He talked about some very cool things we
are doing in Whidbey around NGen, VTable layout, etc. for saving a few bytes per
type or instance. All very cool
stuff. And in fact we see some
fairly substantial performance wins in our performance test cases in the
lab.




But our experience with in-house,
real-world applications has been that they are not realizing this level of
performance win. Why? It turns out their performance is dominated
by other factors. The big wins we
realized at the CLR level are just noise compared to other performance problems
in the applications. With just a
few days of work, our perf team was able to improve the performance of one of
these in-house applications more significantly than all the CLR-level
improvements combined. Their findings are
published here: http://www.gotdotnet.com/team/clr/HeadTraxReport.htm
This is NOT because the app developers
are a bunch of clowns. Rather, it is
because, as hard as we tried in V1, there were still some places where the
design of the platform leads them down the wrong path.




Because he was talking (mainly) to a
set of platform folks, he admonished us to think about how we can build platforms
that lead developers to write great, high-performance code, such that developers
just fall into doing the “right thing”.
Rico called this the Pit of Success. That concept really resonated with
me. More generally, it is the key
point of good API design. We should
build APIs that steer and point developers in the right direction. Types should be defined with a clear
contract that communicates effectively how they are to be used (and how not
to). I am not just talking about
the docs and samples (although those are good) but about the design of the
APIs themselves. For example, give the
“pretty” name to the type most developers should use (e.g. name the common type
“Foo” and reserve “FooBase” for the base class).




A powerful thought, crystallized
well… Enjoy.

Comments (49)

  1. jlewicki says:

    I definitely agree with your comments about API names; it always seemed unfortunate to me that there is a class called BindingManagerBase… I really would have preferred this to be just BindingManager. My rule of thumb is that I use a ‘Base’ suffix in a class name only when clients aren’t using the type in a polymorphic fashion. So CollectionBase is fine, but BindingManagerBase is not…

    Of course if there had been an IBindingManager then this would not have been a problem…

  2. Shane King says:

    Seems like a lot of great words that don’t carry much meaning.

    Pretty much every language/API/framework developer would like to have things "just work", such that the obvious way to write code was the best.

    So I don’t see a great deal of value of stating that, it should be obvious. Concrete ways of achieving this is what the big issue is. I’d love to see more and deeper usability research done on programming languages and frameworks, which might give us some data to base such work on.

    Otherwise, when you design something such that the "obvious" way is the right way, what you’re really doing is designing it such that the right way is the way that’s obvious to you. Which is a start, but not necessarily what’s good for other people.

    It’s funny how there seems to be an industry push towards usability guidelines backed up by studies and customer feedback when it comes to the UI in the end product. Yet the end product that us programmers use day in and day out is really designed on a "seems like the right thing to do to me" basis. Something that’s largely been rejected due to poor results for the end users, yet we’re made to suffer through it!

  3. Brad Abrams says:

    Shane, your point is taken. In fact, we have several full-time API usability engineers who are constantly developing and running studies about what makes good API design. There is nothing like sitting on the other side of the one-way glass watching a developer struggle to use an API you thought you designed well. We are working hard to give that experience to the designers of as many of the API sets as we can. On a more leveraged scale, we are factoring much of that information into the Design Guidelines document and then into the design of our APIs. It seems there is clearly some value in getting more of the raw usability information out… I will see what I can do.

  4. Phil Syme says:

    Although this is tangentially related, I wanted to add a "real life" experience. At my current day job, we’re developing an extensive WinForms-based desktop application within an n-tier .NET app. Our target deployment platform is Win2k machines @ 500 MHz w/ 256 MB RAM (I would bet that this platform is the norm for large corporations such as the one I work for). We recently started NGen’ing on deploy, and the performance difference is incredible (for the better). From casual observation using perfmon, it appears that code is being JITed “all of the time” – possibly re-JITed or something else. The JITing is taking up most of the CPU most of the time on these low-end machines. Of course, we will figure out the exact story of what and why this is so with enough time and effort, and correct the bad thing(s) we’ve done code-wise so that NGen won’t make such a difference.

    My point is that I shouldn’t be in this situation. I’ve been a C/C++ guy for many years, and in my experience you have to try very hard to get bad performance from a C++ implementation. The same should hold true for .NET-related activities.

    Please, throw me into the pit.

  5. KiwiBlue says:

    Phil, I have to disagree with you on ‘trying very hard not to do the right thing’ with respect to C++ implementation. Recently, I had to review a piece of C++ code which had been passing large vectors by value to various methods. Since the guy who wrote the code no longer works here, I don’t know if he was just missing const &, or had no idea of how objects are passed by value in C++ (and how time-expensive memory allocation is).

  6. madhu says:

    Brad,
    the link to the gotdotnet page you have in the post is not working. Can you repost the correct link?

  7. Brad Abrams says:

    Sorry about that — fixed

  8. John Dhom says:

    I like the metaphor… especially coming from a CLR guy 😉 At first read (this blog and the headtrax perfwhacking session) I can see how you can do some of this with the CLR. It’s a little less clear as you move up the food chain. I hope you find the following of interest… here’s a half step up the food chain.

    Jitrz (java/.net) can do some interesting things beyond loading/compiling. Jitrz do late-bound, for lack of proper term, optimizations… things that take into account the run-time/access behavior… things that cannot be known at compile time (static). They establish an internal feedback loop regarding code and use.

    Now take the Headtrax perfwhacking report… much attention was paid to assemblies, disk access, etc…. even suggestions on optimizing deployment assemblies based on run-time specifics vs the logical model. Gah! [note: i understand why] I’d like the CLR (et. al.) to do for my assembly loading what the Jitrz do for late-bound optimization. In other words… figure it out and fix it. Ha! It has all the info on access patterns and probably load times.

    So in my perfect pit (Headtrax developers too), the assembly performance (load) and access patterns would be analyzed and assemblies "regrouped" to improve performance. I can keep my [il]logical assembly groupings… the CLR optimizes it over time (based on its current implementation and how users "use" the software)… and users get good performance by accident… I mean, fall into the Pit of Success.

    /jhd

  9. Frank Hileman says:

    The perf advantages of JIT compiling mentioned by John Dhom seem only to apply to server side applications. For client side apps, the user is so annoyed by the JIT delays that the JIT compiler cannot do much of any optimization.

    Consider that the new release of Office is not written in managed code. From the blog of Jason Zander: "We did some experiments early on in the Runtime as proof of concept for our managed C++ compiler which included recompiling Word as an IL image. It worked great! But it was slow. Office is a big application, and using the JIT for this case didn’t put our best foot forward."

    MS cannot eat their own dogfood, but it is on our plates. The JIT compiler simply does not make sense for client side apps. Nor does it make sense for tiny devices such as the PocketPC. I can understand how small bits of code downloaded from the internet need JIT compiling for security. But to compile every installed client app, every time it is run, and to throw away the results: such a wasteful process (by default) throws us into the pit of failure! Also loading the IL when only the native code and metadata is needed, in an ngened app, makes no sense. Why does MS release and recommend technology that is not good enough for its own applications?

  10. Anonymous MS Developer says:

    I think you’ll find that managed code is being used in many more places as time goes on, there is a ramp-up time and no one thinks that rewriting an application for the fun of it is the best solution.

    The CLR team is doing some amazing things to improve perf and working set dramatically. Even then, it’s not appropriate for all cases.

    The truth is that some things will remain unmanaged for a long time. But it’s also true that many of the new features you see in new Microsoft products will be managed code. Yukon, Longhorn, WinFS, Avalon, Indigo – these all revolve around managed code. It’s incredible the amount of investment there is.

    Good suggestions for doing NGEN; but as the Headtrax example shows, NGEN doesn’t always help, and sometimes can hurt.

    Your focus on ‘big’ client apps may be misplaced; if anything a nice model is developing where a smaller amount of code mixed in with declarative programming (ASP.Net) is the sweet spot of development. Often, users want components that fit into a larger experience, rather than an all-encompassing experience on its own. Not every application is Office or Photoshop. If the cost of loading the CLR is already paid by the host application, the marginal cost of JIT for a component isn’t too bad.

  11. Frank Hileman says:

    Thank you for the response, anonymous. I was not only focusing on big client apps. If you look at the perf and windows forms newsgroups you will see a common complaint: the old app, ported to .net, starts up too slow. Often the problem is the jit on the designer generated code. Another common problem is the number of and size of dlls loaded. Even small apps often start up slowly compared to older C++ versions.

    Is the solution profiling, possibly ngen, and reworking the app? This is the only solution currently, but it is contrary to the "pit of success" idea at the top of this page.

    I don’t believe there must be a trade-off. We should be able to use managed code and not have to think about performance any more than with a C++ app — that is, focus on algorithms and data structures, not JIT compile time. If something is JIT-compiled once, it should be saved and reused next time. Or NGen should be the default for desktop apps. Managed code is a great thing, but if the perf problems are not addressed, for client apps it may gradually acquire a reputation similar to that of Java client-side code: slow and fat.

    I think managed code has the potential to replace almost all application code. Not all OS code, or games, but almost everywhere C or C++ is used. A real focus on performance and the idea of the "pit of success", by MS, would take it in that direction.

  12. My team is starting detailed design and implementation work over the next week or so and I know of few…

  13. I recently had a fun time baking cookies with my three year old son.. He had a great time scooping out…

  14. Programming says:

    Most software projects fail. But that doesn't mean yours has to. The first question you should ask

  15. Programming says:

    Eric Lippert notes the perils of programming in C++: I often think of C++ as my own personal Pit of

  16. When you design your API, make it easy to use and hard to misuse. Brad Abrams quotes Rico Mariani on

  17. .Net World says:

    When you design your API, make it easy to use and hard to misuse. Brad Abrams quotes Rico Mariani on

  18. ASPInsiders says:

    I’ve been getting more and more interested in how folks extend their applications using plugins and things.

  19. Tim Barcz says:

    Yesterday Oren announced the release of RhinoMocks 3.5. While the RC version has been out for


  21. Jimmy Bogard says:

    One of my pet peeve questions I often see on various tech mailing lists is “How can I prevent situation

  22. Eric Hexter says:

    One of the features I was really excited about for the MVC RC was the template/Model based scaffolding

  23. Note: This article is submitted by Damon Payne for Silverlight: Write and Win contest .Thanks a lot,
