Posting check-in mails to a blog

AKipman proposes a very interesting idea. He says:

"Lastly *I think we should be posting check-in mails* and I will be doing so in the future. I already do so internally every time we post an LKG (last known good) to our internal partners, and I will be posting the same list externally when we ship the beta of Whidbey. I think this is the single most valuable information a developer can have when he/she is trying to figure out what the delta is since the last technology preview, alpha etc. What has changed? What are the breaking changes? What are the new features I should be pounding on? I strongly recommend this to any feature team, as this is one piece of information our customers have to *discover* by trial and error, and this would give them a clear roadmap of

  1. what to look forward to,
  2. what to play with first and
  3. prepare them for what will break since the last time we did this."


I have to say that I agree with him totally on this. Specifically, what I want is:

  1. Our bug databases should be made available online to the public, both for viewing bugs and for reporting new ones.
  2. Check-in mails for bug fixes should then be made available publicly when we hit the milestones that we release publicly.
  3. A faster release cycle, so that these fixes can be rolled out more quickly.
  4. Information about whether the bug has an active regression test running full time, so that everyone knows the issue will be caught if it comes up again.

This would provide a very tight loop between us and the users directly using our products. It would help improve the quality of the product and would allow customers to make an informed choice as to whether they should be downloading this new release or not.
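Point 4 above could be as simple as a small, named test checked in alongside each fix and run on every build. A minimal sketch of the idea (the bug number, function name, and behavior here are hypothetical illustrations, not anything from our actual codebase):

```python
import unittest

# Hypothetical fix for bug 1234: splitting an empty string used to
# return [''] instead of [], which broke downstream consumers.
def split_names(line: str) -> list[str]:
    return [part for part in line.split(",") if part.strip()]

class TestBug1234Regression(unittest.TestCase):
    """Runs full time; if bug 1234 ever comes back, this test fails."""

    def test_empty_input_yields_no_names(self):
        self.assertEqual(split_names(""), [])

    def test_normal_input_still_works(self):
        self.assertEqual(split_names("ann,bob"), ["ann", "bob"])

if __name__ == "__main__":
    unittest.main()
```

Tying the test name to the bug number means the question "is there an active test for this bug?" can be answered with a simple search.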

Another thing that I think this would help with is the propensity for developers to send out a check-in mail that says "fixed some bugs". These summaries drive me nuts because they give me very little information. Attached to the mail will be the list of bugs fixed and the code that changed, so I can usually grok what happened. However, it's also often the case that there are way too many bugs fixed and far too much code changed, and you end up going "I don't really understand why we were broken before, and I'm not really sure why we're fixed now". What's worse is that if, in the future, you get another bug similar to the one that was fixed, you won't make the mental connection between the two, you won't know who to talk to, and you won't know all the details involved.

I try very hard in my check-in mails to explain what was fixed and what the problem was. I usually try to consider the fact that there are developers, QA, and PMs (along with other interested parties) who will be reading it. So I try to address what the problem was (usually through a simple example of its symptoms), what actually caused it, how it was fixed, why I believe that the fix is correct, and how I think we should work to make sure that this doesn't regress (for example, more focus on testing a certain area of the code). Of course, sometimes I get lazy and don't go into that much depth, but I do try my best.

I do this because I know people are going to be reading these and I know that this information will be useful to some of those out there. I think that the realization that these reports are now going to be read by a lot of people will motivate people to try harder in this regard. I also think it will help improve the quality of the code. I'll explain why. During the process of drafting these mails I've been forced to think deeply about the issue so that I could explain it well enough for everyone to understand. While doing that I'll suddenly realize "hey! damn, I forgot about something" or "waitaminute... what about this case... I don't think that will be fixed". This actually happened today, and because of it I ended up doing 3 iterations over the code until I ended up with something that I felt was right, because I could explain in depth what the original problem was and why my solution fixed it completely. If I had just written "fixed issue blah" or "fixed some bugs" I would never have realized that I wasn't truly fixing the issue; I was just fixing the symptoms that were presented in the bug reports, while not fixing a whole host of other ways for the bug to manifest itself.

Are these kinds of interactions useful or desirable to external customers?

Comments (9)
  1. AT says:

    I’ve discussed similar concepts with Windows Beta team.

    My original idea was to collect feedback regardless of the current product development stage – design, development, testing, or already in production. Do not impose time limits on feedback collection. Similar to 24-hour customer service, there must be a point of contact for _technical_ feedback that is accessible 365 days a year.

    I was not talking about making this process public and open – I simply wish to improve the current feedback channels.

    You currently see only benefits in public feedback.

    But there will be a lot of bad things as a result of wide public and direct access to development teams.

    I have data from several tens of thousands of bug reports filed during the XP Beta.

    Based on the quality of private beta bug reports from a pre-screened/pre-selected group of testers, I can extrapolate my results to a completely public beta.

    Here is what will go wrong with public feedback tools:

    1. Increased noise-to-signal ratio for bug reports. You will have hundreds of duplicate or incomplete reports for the same bug. Your teams will have to spend a lot of resources validating/triaging all of them, or offload this burden to the community.

    2. An increase in bugs not relevant to your product. For example, a Windows user will report a bug about missing pixels on his LCD monitor, a printer paper jam, or simply speculation about his sister's computer not working correctly.

    3. An increase in product support requests instead of actual bugs. You will have numerous "Help me. My boss will come in 2 hours – but I need something to be done ASAP" or "I'm too lazy to read MSDN and search on Google – can you answer my question?" requests. People will abuse your feedback system to get support instead of reporting bugs or requesting features (I hope you understand the difference between these). Even more, they will do this successfully – more than 50% of their questions will be answered and their requests then closed by reviewers.

    Take a look at certain users' "help-me" postings on Channel9, or the complaints from posters about support requests sent via email.

    4. Your teams will most likely lose the focus that comes from the feedback of a group of experienced users by making all people – the 12-year-old kid and the 30+ developer – equal.

    Teams will pay the same attention to all the problems people have. This is somewhat good, as you will be listening to the same people who will buy your products. But feedback from a more experienced developer must be listened to with higher priority than questions from a newbie. Problems that experienced people were unable to work around will have a big impact on all users, while newbie problems often have a pretty easy workaround and can be resolved as by-design.

    5. You will be unable to collect feedback privately and will possibly lose your competitive advantage.

    a) Any suggestion for *your* product's improvement can appear in competing products first and take you out of the market. This will have a big impact on Microsoft with its current 2+ year idea-to-market cycle.

    b) There are also big privacy issues. Users will be unable to collaborate with you in detail on their business scenarios and expected product usage. Nor will they be able to upload memory dumps, exception stack traces, or source code snippets from the real products they are developing.

    6. Average users will refuse to share their ideas in public (and, as a side effect, with the company) if they know there can be personal attacks from others. A person will be scared to stand up and defend his idea in public. Consider yourself a conference visitor or speaker: compare asking a question during a talk in a full conference room with talking privately with someone in the hall.

    There exists a pretty trivial workaround for this – assign a priority to each person:

    Internal company users – priority 0,

    Top 100 big customers like DOD, Coca-Cola or Intel – priority 1,

    1000 MVPs – priority 2,

    10000 technical beta testers – priority 3,

    TechNet/public beta evaluators – priority 4,

    Software Assurance customers – priority 5,

    all other customers – priority 6,

    future customers and other people – priority 7

    And consider feedback differently based on customer priority.

    But here is a question – how is this different from the current situation?

    Priorities 0-4 already have established (and working) feedback channels for preview/beta versions. I even know of some successful product design/specification review experiences.

    5-6 have PSS support contacts.

    Even 7 can freely use

    So? What is this "Better Customer Connection" trend about? Can you simply *improve* the existing connection without reinventing the wheel?

    Why do you need to spend more time talking about this instead of simply doing your job?

  2. Adam Hill says:

    Yes, yes, yes!

    Another side effect of letting the outside world see a development process is *just how hard it is*. People think they can ask for feature X in Y and 4 months later it pops out. Regression testing is hard, resource allocation is hard, performance is hard.

    Letting developers and users see all this is a very good thing.

  3. AT: A voting-based system where people could help rank what they want fixed would also go a long way. It would help us decide what order we want to fix things in.

  4. AT says:


    Voting will probably be fine – but:

    1. Only after an initial request review (so no lame reports will be passed on to the thousands of your web-site visitors)

    2. No anonymous voting (and no easy account creation)

    3. Not for bugs – but for new features only.

    As well – I do not think that you need democracy, with bugs going unfixed because only 10 people voted for them. Microsoft has strong teams of result-oriented people. Mostly they make correct decisions.

    As for feedback process improvements – start with Joint Development Partners, Microsoft vendors, MVPs, and the current tester groups.

    Only if these improvements do not work should you make your company world-accessible.

    Just a note – taking into account the highest BetaID I can remember, you have over 400,000 people who have participated at least once in Microsoft Technical Betas. Why are you not using them effectively? Do you expect people to assist you in your projects?

    If you wish to post progress reports – start posting them to the Priority 0 group (internal Microsoft users) and collect feedback from them; then go to Priority 1 (JDP) and collect feedback from them; then 2 (MVPs); and so on …

    The results? You will have big and continuous improvements in return. Why? Because each time a new group of people will read the latest corrected version and get a detailed view of your problem (corresponding to the current review phase), while trivial problems will be reported by the smallest possible groups.

    Group 0 will give you the core concepts/overview and fix basic flaws, 1 will clarify major details, 2 – minor details, 3 will tell you what they think about it, but 4-6 will simply vote …

    But if you drop your report on the public immediately – you will get 100 emails/reports about a single issue. And they will not read the corrected report again (because they already got the basic information – and that is their only motivation).

    Something that works for a small open-source project will not necessarily scale well to Microsoft's size.

  5. AT: Thanks. The thought sounds good for testing this out. We have internal newsgroups here and I think pushing these reports out to people might be helpful. I’ll ask around.

  6. AT: I’m not sure why large numbers would be a bad thing. Given a ranking system whereby the community self-regulated submissions, I could see this working rather well.

  7. AT says:

    A community-regulated software development process is not predictable.

    You will have spikes or periods of zero activity.

    I agree that peer review is a must for a software development project. It is one of the requirements for SW-CMM Level 3. But it is a big question who must perform such a review. Will it be the ad-hoc, non-regulated public, or will it be a group of people you are able to measure and collect progress reports from?

    If you really need to make something public – simplify the process of moving people from a lower priority level to a higher one.

    Something like the current situation – a customer who has reported at least one bug via PSS must be given direct access to TechBeta, a user who actively supports other peers in the newsgroups must be given MVP status, a company that actively works with the products must be signed up as a JDP, etc.

    This kind of openness will be measurable and will motivate people. Not ad-hoc – came, suggested an idea once, disappeared.

    Summary – I have nothing against your ideas.

    Even more – I support the idea of information sharing, and I requested some kind of "What’s new" document (other than the 2-page MSDN article) on 1 Oct 2003 (really a long time ago ;o)

    But any idea has benefits and drawbacks. It’s easy to find _possible_ advantages – but you will understand the _real_ problems only after actual implementation.

    To make correct decisions you must analyze both sides of the good-vs-bad equation.

Comments are closed.
