The big framework (Uwe Keim)

Uwe Keim posted the following comment to my “Dumber” post.

But if I think about the future, I fear that we (the developers) are slowly losing our knowledge, bit by bit, to the “big framework author” (like e.g. Microsoft), and some day we will suddenly see that we must take whatever the “big framework author” gives us, because we have moved step by step into a deep dependency on the “big framework author”.

What do you consider to be the framework: the .NET runtime components, the MFC runtime DLLs, or the operating system itself? At every level you are calling functions or interfaces that abstract away the ‘core’ functionality of the operating system. For example, in a Win32 application, calling LocalAlloc allocates some memory for the application; this in turn calls down into the o/s, which at some level allocates the memory. What is the actual mechanism the o/s uses? Do you care? In a managed application you ‘new up’ an object. Do you really need to know the mechanism the .NET runtime uses to make that happen? Let’s go one step deeper: in a Win32 application, calling CreateFile() on “COM1:” returns a file handle to the hardware serial port “COM1”. Do you need to know how that really happens?

At some level we, as developers, rely on abstractions over the underlying hardware and over object creation. Where do you draw the line on your level of dependency on the operating system or programming framework?

– Mike

Comments (3)

  1. Mike Dimmick says:

    In really, really high-performance applications, you do need to know: allocating one way might give better cache locality than another. But I agree, in the main we do not.
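    A minimal C sketch of the cache-locality point above (my illustration, not Mike Dimmick’s code): two allocation strategies compute the same answer, but one lays the data out contiguously, which sequential traversal and the hardware prefetcher reward, while the other scatters individually allocated nodes, so traversal chases pointers across memory.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define N 100000

    /* Nodes allocated one at a time may land anywhere in the heap. */
    struct node { int value; struct node *next; };

    int main(void)
    {
        /* Layout A: one contiguous block. Walking it touches
         * consecutive cache lines, so locality is excellent. */
        int *arr = malloc(N * sizeof *arr);
        if (arr == NULL)
            return 1;
        long sum_a = 0;
        for (int i = 0; i < N; i++) arr[i] = 1;
        for (int i = 0; i < N; i++) sum_a += arr[i];

        /* Layout B: a linked list of individually malloc'd nodes.
         * Each hop may be a cache miss, even though the arithmetic
         * is identical. */
        struct node *head = NULL;
        for (int i = 0; i < N; i++) {
            struct node *n = malloc(sizeof *n);
            if (n == NULL)
                return 1;
            n->value = 1;
            n->next = head;
            head = n;
        }
        long sum_b = 0;
        for (struct node *p = head; p != NULL; p = p->next)
            sum_b += p->value;

        /* Same result, very different memory behaviour. */
        printf("%ld %ld\n", sum_a, sum_b);

        free(arr);
        while (head) { struct node *t = head->next; free(head); head = t; }
        return 0;
    }
    ```

    For most code the difference is invisible; it only matters when, as Mike Dimmick says, you are in really, really high-performance territory.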

    My main beef with the .NET Compact Framework at present is that on the relatively few occasions you need something more than the framework offers, it’s hard to interoperate. It’s difficult to host an unmanaged control on a managed form, for example, and even harder to fire events back.

    I did this with a signature-capture control – NETCF is weak on saving images to files, and our unmanaged control drew 2-pixel lines. System.Drawing in NETCF only offers 1-pixel pens, which looks spindly. So I adapted the unmanaged interface to make it callable from C# (it was formerly an ActiveX control) and made it fire events by sending messages to a window handle, since you can’t call back into managed code in v1.0. I then used a MessageWindow to handle the messages and fire managed events. Painful.

    I suspect that in v2.0 we could have used the existing control as-is, or rewritten it completely in C#. The latter would probably be better as it should reduce the risk of memory leaks.

    You also need to understand how the framework does what it does in order to work out what happened when it goes wrong. Again, this is more for the advanced developer – and people like me who need to see the lower levels to fully appreciate the higher ones.

  2. Jeff Atwood says:

    > is that on the relatively few occasions you need something more than the framework offers, it’s hard to interoperate

    Exactly – as long as the framework is truly all-encompassing, this approach works great. The more leaky it is (and I’m talking about FAQ-level, normal developer needs), the less likely it is to work.

    The risk is that the designers of the framework were out of touch with what developers are actually DOING. I’d focus on the areas where lots of developers are forced to work outside the framework; that’s a clear indication that whoever developed the framework screwed up, and those holes need to be patched ASAP.