Microthink: If you can’t measure it, then it doesn’t exist

At Microsoft, there is an obsession with measurement. If you can't measure it, then it doesn't exist. As a result, we set up data collection mechanisms and try to interpret the data, even when it isn't what we're really interested in; we act as if it is, because it's what we know how to do. (If all you have is a hammer...)

A classic example of this is trying to gauge the impact of blogging. Microsoft employees who are considering taking up the practice ask questions about measurement.

I want to measure the impact of my blog. I'd like to put a survey at the bottom of my blog that asks people "Did this blog posting prevent a call to Microsoft product support?" or "Was this blog posting helpful?" or "Rate this blog posting on a scale of 1 to 10." Then I can generate reports based on what people think so I can see how effective I am. Somebody in sales might ask "Did this blog posting convince you to buy a Microsoft product?" A developer might ask "Did this blog posting help you integrate your third-party product with Microsoft Windows?"

This smells like "I must make this quantitative and measurable so I can make it a review goal to increase my blog's 'impact' by 25%." In my opinion, blogging isn't like that. Blogging is more about creating an atmosphere. Sure, individual entries may solve specific problems, but the cumulative effect is the goal. Using a survey to measure the impact of a blog entry is like having somebody fill out a survey after you give them a ride home because you want to determine the impact that one action had on how nice a person they think you are.

Questions about measuring the impact of blogs will never go away because Microsoft is all about measurement. Many people believe that if you can't measure it, then you can't claim it on your annual performance review.

Comments (33)
  1. John says:

    I rate your blog 8 3/7 (eight and three sevenths) out of 9.

  2. As a former MCS/FTE, I can REALLY appreciate this post. I think it points out something that exists throughout the corporate world. So many things that an employee does (or even does not do) have a direct relationship to the overall bottom line, but are not directly attributable.

    For example, an employee who (while remaining within corporate guidelines) creates a less than pleasant environment may very well be responsible for another employee taking action (such as leaving), which then impacts…

    The Butterfly Effect is very real (although I don't think the movie was good), but I doubt that the corporate world will ever be able to compensate (or even acknowledge) the people who are the "root cause" of the end result.

  3. Jim says:

    The corporate world is one where common sense makes no sense at all. That’s why the people who live in this world are changed as well; otherwise they would be dismissed in short order.

  4. Anthony Wieser says:

    After a recent computer back to base repair from a major PC manufacturer, I was asked to rate the experience with a value between 7 and 9.  I didn’t know if the call center was misreading a 1 for a 7, or if it was really the range I should choose (8 seemed the only option given the description), so I declined to be measured.

  5. Marcus says:

    Man, that sounds frustrating Oo

  6. Messiant R says:

    I consider the amount of comments combined with the actual content of those comments the best measurement in case of a blog. A few things need to be kept in mind though:

    Not everyone posts a comment, not all comments are equally well thought through and some people will just find an excuse to flame.

    The actual convincing part, however, is that people can (try to) write exactly what they mean in their comment; you can get an idea of what the commenter thought while reading the post and writing their reply.

    A survey tends to be too shallow for that… I always get the feeling that either all of the given options are completely irrelevant to me, or none really stands out enough to be selected. Surveys don’t really tell how much the selected options apply to the survey taker.

    And a belated merry Christmas

  7. Jonathan says:

    Well, it’s formally part of the review training: It says that the goals you set have to be "SMART" – Something, Measurable, Attainable, something with R, Time-something.

  8. Yuhong Bao says:

    The Linux kernel developers have the same problem with performance, as Con Kolivas showed. If it is not quantifiable, they are not interested.

  9. YME says:

    Many months ago, I found a mistake on an MSDN page.  So, I tried to leave feedback explaining what the mistake was.  It insisted that I also rate the page by giving it a number of stars; otherwise, it wouldn’t accept the feedback.  Fine.  Whatever.  I gave it some stars.

    Well, it has stars now, but the mistake still hasn’t been fixed yet…

    (The page: http://msdn2.microsoft.com/en-us/library/ms224424(VS.85).aspx

    The mistake: If value is Empty, the return value is actually startIndex, but it says it’s 0.)

  10. Joe Chung says:

    YME, try leaving a note in Community Content for that MSDN posting with your explanation of the mistake so that others who view the page will at least know that it is incorrect.

  11. ATZ Man says:

    @Yuhong Bao,

    Performance in the Linux kernel is always quantifiable. Proposals simply have to show that context switch times would be reduced by N nanoseconds or that 5 bytes per swahoogle[1] would be saved in the swahoogle cache. This is not a blind spot on the part of the Linux people.

    It is just the tyranny of SMART goals. In some jurisdictions, such as the US, you can fire people at will as long as you have a paper trail of them failing to meet their goals. When the age-discrimination suit gets to court it has to look like the firing was objectively inevitable and there’s nothing like hard numbers for that. And the real innovation is to get the employee to make up the numbers and the standards, which makes it even easier to railroad someone, since the employee was pulling both numbers and standards out the back of his anatomy.

    [1] I don’t think the kernel needs any swahoogle optimizations, it’s just a hypothetical.

  12. JonDR says:

    It is the same way at the U.S. Government entity where I work. For a while I was put on the same tracking as the computer techs and was rated for projects like 1) developing new methods (and implementing them) for testing quantile regression in statistics and 2) implementing a GUI for an ecosystem regional model, whilst the others had 1) repair printer in RM3130, 2) set up e-mail account for new employee Jill Jones.  So I entered daily status reports into the comments section (luckily they had an inordinately large field). The team lead kept looking at the large number of ticket items resolved by techs and my projects going on for weeks. We finally worked it out. I guess when it seems like magic to them, a small magic is indistinguishable from a large magic.

    Worse: I had to petition and beg to get to MSDN blogs.  Access to Usenet Newsgroups (including ALL the comp.x.x and sci.x.x groups) is denied.

    What is *wrong* with these pointy-haired bosses?

  13. Cheong says:

    I also believe that if you try to measure everything, you lose sight of the reason you started doing it in the first place.

    Unless measuring is part of something I’m doing, if others tell me to measure it, I’ll probably 1) tell them to measure it for me, 2) completely ignore it, or 3) stop doing it altogether.

  14. JamesNT says:

    This post makes you think about how many other MS employees Chen just pissed off today.


  15. Thom says:

    Sigh… I don’t know whether to pity people who think like that or pity the rest of us.  A blog that has impact is like porn – it resists most attempts to label or quantify it but we all know it when we see it.

    Raymond’s blog has impact.  At times it’s exciting, addictive, wanted, needed, informative, smart, snarky, humorous, present, past, authoritative, educational, motivational, … , fun.  In many ways one could equate it with Windows developers’ porn.  I’m obsessed with it and keep coming back for more.

  16. Hmm…  while it sounds like they are trying to force a level of precision where one doesn’t quite exist, measurements of blog impact are looked at every day, at both the post and aggregate level.  Your blog, for example, shows trackbacks.  Your page has a tracking pixel (img src="http://c.microsoft.com/trans_pixel.aspx?TYPE=PV&r=http%3a%2f%2fwww.techmeme.com%2f" width="0" height="0" alt="Page view tracker" />).  Even if ranking posts seems silly, at the same time you are trying to understand the conversation and usage around the basic units of the blog, the posts, via techniques like this.

    So, funny to post about surveys and such (ha ha), but the real question is: how do you propose to measure the aggregate quality of the blog experience you hope to create?  You do this to be funny, to share, to create joy or pain… how do you know if this is happening? How do you know if anyone reads it, or cares?

    That’s the measurement question.  Everyone’s a critic, but your post offers no suggestions.  Creating an atmosphere, a cumulative effect?  Great.  How do you measure that?  And don’t waste my time with “why do you need to?”  Because every reason for writing a blog, sooner or later, comes down to “is anyone listening, does anyone care what I say, and do they care enough to respond somehow?”  And the answer to that question should be the blog metric of choice.

    [Do you apply a metric to “how nice a person I am”? How do you measure that? And don’t waste my time with “why do you need to?” Because every reason for being a nice person, sooner or later, comes down to “is anybody noticing, does anybody care what I do, and are they nice back to me?” and the answer to that question should be the niceness metric of choice. -Raymond]
  17. Dean Harding says:

    "How do you know if anyone reads it, or cares?"

    If that’s what you’re worried about, then you’re blogging for the wrong reasons.

    The suggestion Raymond is making is "don’t try to measure the ‘impact’ of your blog." That’s IT. Why must everything be "measurable"?

  18. JD Meier says:

    I agree the cumulative effect is the goal.  Thinking about our Microsoft blogosphere and portfolio of blogs — the sum is better than the parts.  I’m glad we didn’t measure one blog at a time.

    I wouldn’t mind the focus on bean-counting so much if I didn’t run into so many of the negative effects:

    * Failure to do the right things because you can’t prove they’re the right things to do.

    * Focusing on the "impact" at the expense of improving your game.

    * Focusing on the scoreboard instead of focusing on the pitch. (You play a better game, one pitch at a time.)

    Some things are easily and directly measurable, and other things aren’t.  Discounting the value and the impact of what we can’t measure is a recurring problem.  For the stuff that’s not easily measured, that’s the place where I think you have to trust smart people to make the right bets, live and learn.

    For anybody starting a blog, I recommend they focus on their personal, compelling "why" behind the blog before worrying about any measurements.  It’s the "why" that keeps you going.

  19. Toukarin says:

    I guess it’s common for everyone to attempt to quantify things to help justify what they’ve done, not realizing that sometimes success isn’t measured that way.

    Raymond has done a great job in blogging what he wants to write – and it’s so successful that people want to copy and model it, and attempt to gauge if they’re just as successful.

    I feel for Raymond, especially when something he writes out of fun/passion/etc. starts getting ‘abused’ by nitpickers or slashdotted every now and then, generating unnecessary irritation that pushes him to ‘threaten’ to stop blogging or ban comments every once in a while.

  20. Anon says:

    If it is any help tell them I’m 86.3% more likely to recommend Windows Vista to friends and family and 67.2% more likely to spend $50 or more per month on Xbox360 titles in months where I read this blog.

  21. Dave says:

    Another way to look at it is, "You get what you measure." If you try to optimize for the things that you do measure, you often sacrifice the things you don’t (or can’t) measure.

    For example, sales people are ultimately judged by their ability to make sales, and there’s an easily quantifiable thing to measure–money. But perhaps some of those customers feel like the buying experience wasn’t so good, and never come back to that place again. It doesn’t matter to the overly aggressive borderline-unethical sales person because they still get a good chunk of sales from the remaining customers that walk through the door.

    Complaints or customer satisfaction surveys are nowhere near as quantitative or measurable as money. It seems like a lot of people have complained about Vista, for example, but Microsoft seems to be making a lot of money from it. So is Vista a success or not? Money says yes.

  22. Mikkin says:

    Two misconceptions often go together:

    1)  If you can’t measure it then it doesn’t exist.

    2)  If you can measure it then it must be relevant.

    So, in order to justify something’s existence, the natural inclination is to create a metric and assume it has relevance. Even if there is no direct incentive to maximize the measure, the presumed relevance creates its own implicit incentive. Then you get what you incent.

    So if you measure page hits, people will quickly learn they can maximize traffic with salacious postings. If you measure the number of comments, controversial troll baiting will become the order of the day. Don’t even get me started on where polling leads – at least not in an election year.

    If the objective is to build community, and there is no doubt Microsoft benefits greatly from the community of developers, it is inherently difficult to measure in a relevant way. Any measure of blog impact can create an incentive to truncate the "long tail," thereby weakening the very diversity that is the strength of the ecosystem.

    It is not that there are no useful measures. It is just not very easy, and even the best measures are easily misused.

  23. Good Point says:

    Feel free to add this as one of your review goals.  I’ll rate all of your postings as ‘5 stars’ from this day onward.

  24. Mr Cranky says:

    Yeah.  You *do* get what you measure, so you’d better be sure of what you want to get.  Joel wrote on this years ago: http://www.joelonsoftware.com/news/20020715.html

    Coincidentally, I just saw a Dilbert cartoon on the wall of an office where the PHB institutes a $10 bounty on fixing bugs.  Dilbert, Wally, & Alice are jubilant, shouting "Hurray! We’re rich!".  The last panel has the PHB muttering, "I hope this provides the right incentive", while Wally exclaims, "I’m gonna write me a new mini-van!"

  25. Evan says:

    @ATZ Man: Performance in the Linux kernel is always quantifiable.

    Really? What benchmark do you use? Something desktop oriented? Web server oriented? DB oriented? Something that spends extra CPU cycles to decrease I/O costs, so it speeds things up on new, underloaded computers but slows things down on old, loaded ones?

    Sure, you can give numbers, but going from "here are numbers that apparently show why my replacement is better than the original" to "the replacement is definitely better than the original" is still an awfully big step.

  26. Stephen Jones says:

    —–"Do you apply a metric to "how nice a person I am"? How do you measure that?"—-

    Using negative numbers, and scientific notation :)

    I actually get annoyed at the "did this help your problem" question, particularly when you’ve got to reboot before you can tell, but at least that gives some kind of feedback.

  27. Aaron says:

    It’s quaint that somebody would actually want to apply a 10-second metric to a full year’s worth of *their own* hard work.  I can’t think of a less subtle insult to the craft.

    When I see a task that can be distilled into a couple of handy measurements, I see a task that can be automated.  If no decisions have to be made, then you don’t need a human to do it.  If decisions do have to be made, then the complexity of each decision has to be factored into the metric in order to make it useful, and nobody has yet come up with a way to measure that complexity.  It’s the quintessential managerial meta-problem.

    You can have two different writers post exactly the same article, and they will receive entirely different reactions, because they were written by different writers with different reputations and different personalities.  How does any metric account for this?

  28. Tom West says:

    In the defense of metric madness, have you ever seen what happens to an organization where there are no metrics at all?

    Using judgment in place of metrics works well when the organization is small (< 10 people) and full of dedicated employees, but by the time you have a large organization with most people just "doing their job", metrics can prevent massive wastage (at the cost of heavy wastage, of course).

    Of course, lack of metrics can allow for innovations that might be impossible otherwise, but at the cost of a higher chance of failure.  Since most large organizations fear failure more than they desire unexpected success, it’s entirely within corporate goals to go with a metric-heavy management methodology.

  29. Marc K says:

    There’s a story that goes something like this.  A manager of a nail factory was given a bonus based on quantity, so he switched production solely to staples.  The owner didn’t like this and started giving the bonus based on weight, instead.  The factory manager switched production solely to railroad spikes.

    The moral: You get what you measure.

  30. Miral says:

    "YME, try leaving a note in Community Content for that MSDN posting with your explanation of the mistake so that others who view the page will at least know that it is incorrect."

    I second that.  I once made a comment via Community Content explaining why a particular article was misleading (not actually wrong, but it didn’t cover all the cases) and it was fixed within a week.

    And yeah, on topic, I’ve seen a lot of these sorts of metrics cropping up.  They really don’t mean much (I mean, how often do you rate the content in MSDN?  Then what chance does a blog entry have?).

  31. Funny…  I learned that you measure things to improve them.  I am pretty sure that if no one ever read this blog, Raymond would stop writing it after a while.  

    The atmosphere is made of speaking and listening, of responding, of interacting.  Blogging is more than just publishing; it’s creating a shared experience.  If all you are doing is yelling out to the nothingness, that’s noble, but wouldn’t you rather be part of the conversation?

    Measurement can help you understand which of the things you write create, contribute to, or enhance the shared experience.  And if you want to keep writing even if the measures show that no one is reading or caring?  Be my guest.  No harm, no foul.

    But just the same, there is no harm and no foul in trying to understand what people like and don’t like.  In trying to learn what things were more helpful or interesting than others.  I think everyone wants to know that, even if the only reason they write a blog is to scratch their itch to be nice, to share, to educate or contribute.  Feel free to ignore the metrics, or blog to them, as you wish.  

    Raymond had a point that the measurement proposed was wrong for what they were trying to track.  But to say that measurement of any kind of a blog is wrong is just as incorrect.  What you do with the measurements is the real issue.

  32. Chris Walker says:

    One of the founders of H-P said something like "If you can’t measure it, you can’t manage it".

    I often joke that at Microsoft, if we can’t measure it, we won’t control it. This tends to push people to work on the visible, measurable things and ignore the others.  Things like understandability of code (say, for later developers), adaptability, code footprint, etc. are typically not worked on or rewarded.

Comments are closed.