Writing code is the easy part


Having been at Microsoft for a little over four months now, I’ve started to get into the swing of the development process here. I’ve been mostly helping


Something I’ve noticed is that writing code, tuning performance, or debugging is the easy part. It’s often not even the most time-consuming part of my day.


So what sort of non-coding, non-debugging things take time?



  • Syncing your machine to get the latest builds and code updates (although most people start this before heading home at night).

  • If the latest build didn’t install correctly, figuring out what went wrong.

  • Getting your various IDE & tools settings back after installing the latest build completely wipes out the previous installation.

  • Finding what you’re looking for in the vast source trees that are Whidbey and the CLR.

  • Running the same 24 step process to recreate a bug scenario for the 1019th time.

  • Updating pre-checkin tests so that they’ll continue to run after your code goes into source control.

  • Figuring out why a particular pre-checkin test fails on your machine, but not for anybody else.

  • Shepherding your submitted code changes through the gauntlet system that ensures build breakers don’t get into source control.
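That 24-step repro scenario is the sort of thing worth scripting. Here’s a minimal sketch in Python; the steps themselves are placeholders, since the real commands depend on the bug being chased:

```python
import subprocess

# Hypothetical repro steps -- a real scenario would list the actual
# 24 commands; these placeholders just show the shape of the script.
REPRO_STEPS = [
    ["echo", "sync sources"],
    ["echo", "build the test harness"],
    ["echo", "launch the app with debug flags"],
]

def run_repro(steps):
    """Run each step in order, stopping at the first failure."""
    for i, cmd in enumerate(steps, 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return i  # report which step broke
    return None  # all steps succeeded

failed_at = run_repro(REPRO_STEPS)
print("all steps passed" if failed_at is None else f"failed at step {failed_at}")
```

Even a dumb runner like this beats typing the same 24 commands for the 1019th time, and it tells you exactly which step went sideways.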

In short, there’s a lot of process involved in contributing code to a product. I don’t see any substantial ways it could be made better without sacrificing quality, yet so much of it could be done by a reasonably trained monkey. It could be worse, though. A recent email from my fiancée (who’s an interior designer) had this to say: “Meanwhile I am using my two college degrees and 17 years of work experience to make labels for tile.”


Comments (12)

  1. This is amusing. Over in OSS Java land we have Gump (link above) to clean-build all the OSS projects overnight and send hate mail when something breaks.

    So if you check your mail or the Gump status page first thing in the morning, you will know if your project has been broken by anything. If it hasn’t, you check out your stuff and rebuild.

    Partly this is a function of loose coupling (I don’t try to update the Eclipse IDE every morning, just the things I work on). And I know that with a global developer base, someone in Korea or Germany will have hit and fixed a problem before me.

    But I think another secret is that Java can run everything side by side. There is no GAC, there is no ‘one true runtime version’; you just give each project its own version of everything, even JREs if you want. On the bleeding edge of nightly .NET builds you don’t have that luxury.

    Mind you, we all have non-replicable bugs. My build of Axis marshals xsd:time out by an hour from localhost to localhost, and I think it is because I have a UK-locale box in GMT0BST; nobody in the US sees the problem. Which goes to show: even the most minimal bit of unique state (here, the timezone property) is enough to break something.
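    To make that failure mode concrete, here is a small Python sketch (not the actual Axis code) of how dropping the UTC offset when marshalling a time lets two boxes in different zones disagree about the same instant:

```python
from datetime import datetime, timezone, timedelta

def marshal_naive(dt):
    # Buggy: drops the offset, the same shape as the xsd:time
    # bug described above -- "09:00:00" is ambiguous on the wire.
    return dt.strftime("%H:%M:%S")

def marshal_with_offset(dt):
    # Correct: keep the zone information in the wire format.
    return dt.strftime("%H:%M:%S%z")

bst = timezone(timedelta(hours=1))            # UK summer time (GMT0BST)
t = datetime(2004, 6, 1, 9, 0, 0, tzinfo=bst)

print(marshal_naive(t))        # 09:00:00 -- a UTC box reads this an hour off
print(marshal_with_offset(t))  # 09:00:00+0100 -- unambiguous
```

    A US box in a UTC-offset zone never trips over the naive form against itself, which is exactly why nobody there sees the bug.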

  2. bob dobbs says:

    How much would Java help you if your job were working on the Java runtime? Honestly, Java weenies make me laugh. They think they’ve solved all the problems of software development, when most of the time their job is only easy because they’re doing something that was done 10 years ago by real programmers in C/C++.

  3. Phil says:

    Other time consumers

    * Design reviews. Necessary, but the team might waste hours making up its mind on which way to go…

    * Getting consensus from project stakeholders from multiple departments. This could take months…

    * Analyzing and fixing a corrupt VSS database…

  4. Brad says:

    Sounds like this MS project is set up so that builds don’t break; you simply can’t break a build.

    There are benefits and drawbacks both ways, but I’ve seen the "hate mail" approach lead to night after night of broken builds; the bigger the team, the worse the problem.

  5. Steve Loughran says:

    Bob, it would be the same if you worked on the runtime, because you don’t have to use the same runtime for all your code. If/when Sun opens up the Java source, we will write the Ant build files to bring it into the bootstrap process. That process already has the challenge of bringing up Ant and the XML parser without each other.

    The Gump nightly build is not a language thing; modern languages just make linking so much easier. Gump is actually written in Python, and as of this morning it is running on Mono, which currently tests the Ant .NET tasks and will soon test Axis interop.

    What it is is process, and here is the process:

    - you make your source public in a good SCM repository.

    - you use a decent test framework, and add tests everywhere.

    - you test all the time.

    What Gump delivers is integration testing. It ensures that changes to one bit of code propagate across all dependencies in the chain. It isn’t perfect: it doesn’t do a thing for backwards compatibility, and it doesn’t bring closed-source projects into the loop. But it is one of the best examples of distributed software integration I know of, CPAN being the other.
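    The dependency-propagation idea is easy to sketch. Here is a toy Python model of what an integration builder like Gump does; the project names and graph are made up for illustration, not Gump’s real configuration:

```python
# When one project changes, an integration builder rebuilds and
# retests everything downstream of it, in dependency order.
DEPS = {
    "xml-parser": [],
    "ant": ["xml-parser"],
    "axis": ["ant", "xml-parser"],
}

def downstream(changed, deps):
    """Return the projects that (transitively) depend on `changed`."""
    def depends_on(project):
        return changed in deps[project] or any(
            depends_on(d) for d in deps[project]
        )
    return {p for p in deps if p != changed and depends_on(p)}

print(sorted(downstream("xml-parser", DEPS)))  # ['ant', 'axis']
```

    A break in xml-parser shows up as failures in ant and axis the next morning, which is exactly the hate mail described above.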

  6. Adrian says:

    One process speed up we did at a previous company was to have the build machine zip up the intermediate files (.obj, .res, .pdb). When individual developers synced with the source control (using the label for the overnight build), they also grabbed the corresponding zips. Thus they didn’t have to recompile, just re-link. This saved a lot of time, since a full build took two to three hours. Fetching the zip files and linking took ten to fifteen minutes.

    Linking on the local machine seemed to be enough to get the paths right for debugging symbols.

    Furthermore, you could sync up with any official build or any recent overnight build, so jumping back to last Friday wasn’t an expensive experiment.

    We used the hate-mail approach for broken builds, and we never went more than a day without a working overnight build. The developer responsible for the breakage had to fix it first. Meanwhile, everyone else could stay productive with the latest successful build and quickly sync with the corrected build when it became available.
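    The scheme Adrian describes is easy to sketch in Python. The label name and paths below are hypothetical, and a real setup would fetch the zip from the build share rather than build one locally:

```python
import pathlib
import tempfile
import zipfile

def unpack_intermediates(zip_path, dest):
    """Unpack the build machine's zip of .obj/.res/.pdb files so the
    developer can skip the compile and go straight to the link step."""
    dest = pathlib.Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    return sorted(p.name for p in dest.iterdir())

# Build a tiny stand-in zip so the sketch runs end to end.
tmp = pathlib.Path(tempfile.mkdtemp())
zip_path = tmp / "build-2004-06-01.zip"   # hypothetical overnight label
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("main.obj", b"...")
    zf.writestr("main.pdb", b"...")

print(unpack_intermediates(zip_path, tmp / "objs"))  # ['main.obj', 'main.pdb']
```

    With the intermediates in place, only the link step runs locally, which is where the two-to-three-hour build collapses to ten or fifteen minutes.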

  7. smidgeonsoft says:

    How much does a build-break cost in doughnuts? — :o)

  8. Matt Pietrek says:

    We don’t do donuts here (at least within the VS teams that I’m part of). Why? Because you can’t check in a change that breaks the build (in theory, at least).

    There’s a whole system that takes your changed files, compiles them, and runs them through a series of tests. Only if everything succeeds does the system make the actual check-in for you.

  9. That sounds like a good safety measure if it doesn’t get in the way. Does it take more time on your part, or is it automated enough that it feels like ordinary source control?

  10. Matt Pietrek says:

    Mike: It doesn’t really get in our way, except for the time factor.

    From a web page, we submit a set of files (or a change list). The system takes the changes, applies them to the most recent version of the checked-in sources, and builds the whole shebang.

    Next, the system runs through the same set of tests that devs are supposed to run before they check in anything. If everything works OK, the system then makes the checkin on your behalf.

    Put another way, we don’t check in directly to the VCS. Rather, our changes go through an automated gauntlet that makes the check-in for us.

    In theory, nothing that breaks the build will ever make it into the VCS. The biggest downside is that all check-ins are serialized through a set of dedicated machines. Sometimes you’ll wait hours to get your changes into the VCS.
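    Here is a toy Python model of the gauntlet Matt describes. The build and test callables are stand-ins for the real system, but the shape is the same: serialize submissions, verify each against the latest committed state, and check in only on green:

```python
def gauntlet(submissions, build, run_tests):
    """Process submissions one at a time; commit only if the candidate
    tree builds and passes tests, so the tree stays green."""
    committed = []
    for change in submissions:          # serialized: one change at a time
        candidate = committed + [change]
        if build(candidate) and run_tests(candidate):
            committed = candidate       # check in on the dev's behalf
        # else: change is rejected and never reaches the VCS
    return committed

# Pretend any change named 'breaks_build' fails to compile.
ok = gauntlet(["a", "breaks_build", "b"],
              build=lambda tree: "breaks_build" not in tree,
              run_tests=lambda tree: True)
print(ok)  # ['a', 'b']
```

    The serialization is also where the hours-long waits come from: every submission holds the dedicated machines until its build and tests finish.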

  11. a reader says:

    Do you still have build breaks or were those eliminated by your system?

    I believe the cost of build breaks is high for large teams, but I would like to point out several issues.

    From my experience, such systems can cost a lot in terms of management overhead and delayed check-ins.

    1) The system, which is usually built from a set of somewhat ad-hoc scripts, must be managed by at least one dedicated expert admin, and it becomes a bottleneck when everybody needs to check in before a milestone.

    2) In practice, I have spent much time investigating check-in failures that weren’t a direct result of my changes.

    3) The system environment is sometimes difficult to reproduce, because there is no single official build installed on the dev machines. The system uses the latest tools that were checked in, so if the linker was updated, you must have scripts to copy the special tools/environment from the system server.

    4) A "successful" check-in of low-level code may still break or delay higher-level check-ins, because the low-level check-in test suite can rarely cover all (or even most) of the scenarios the high-level code may execute. As an example, a compiler bug may pass check-in tests, since these cannot cover the entire compiler test suite (it takes too long to run). Later, when source code that depends on the buggy feature is checked in, the system will fail a test and delay the application developer. Adding high-level check-in tests for low-level code is important, but they cannot cover many scenarios, because they increase processing time.

    5) Note that the check-in tests themselves must also go through the system, or changes/additions to them may break the product check-ins.

  12. Matt Pietrek says:

    A reader: Some comments on your points

    1) A gauntlet-like system may be overkill for small to medium dev teams. If you don’t have the resources to run one, don’t. As for being a bottleneck, you’re right.

    2)In my (admittedly limited so far) experience, I haven’t seen many problems of this nature.

    3) We use a nightly build system. A dev decides when they want to grab a particular build. The tools used to create the "official" builds don’t change frequently.

    4/5) This comes down to writing good tests, as well as good inter-team communication. The system we use doesn’t solve every issue, but it saves us from many small, stupid mistakes. On the whole, I think the additional effort is worth it.