How I judge the "OneNote Testing" blog


One of the commitments on my list here at work is maintaining this blog. It's been almost two years now, and while it's been quite a bit of fun, I sometimes feel the need to quantify how effective it has been.


One of my goals has been to write a blog article every Monday and Thursday that I'm in the office. Days off don't count - I want this to be fun for me, first of all, and not a burden. I know I've missed a deadline by a day now and then, but in every one of those cases, high priority work interfered. Although this blog is a form of written accountability for me, it has to give way to higher priority testing tasks every so often.


The next most obvious statistic to use would be the number of readers. At first glance, this seems intuitive - the more readers, the better. But for a variety of reasons, the number of readers of this blog can't really be tracked with any known degree of accuracy. My best guess is that I have between 3,000 and 15,000 "regular" readers. Powertoy articles typically see a big increase in those numbers. For what it's worth, the image rotator has worked its way up to become the most frequently viewed posting, overtaking the Task Requests from Meeting Notes, which had previously seen the most downloads.


Even if there were accurate statistics, I would be left with the question of how many readers vs. how many viewers I have. Readers are those who actually read through what I write; viewers are those who (for instance) subscribe via RSS but don't actually read the articles. So this never struck me as a good way to evaluate myself.


The more I looked at easy-to-measure statistics, the less useful they seemed for evaluating effectiveness. This is a much more general observation that applies to many different situations: for instance, the number of bugs I open is very easy to track. I don't want to make a goal of opening X bugs, though, since I could very easily get focused on quantity rather than quality. I could get so caught up in finding (for instance) 100 bugs in a month that all I look for is typographical errors in strings, while missing the rarer bugs that are far more severe. Every statistic I could track like this eventually felt wrong the more I thought about quantity vs. quality.


So what I am working on is my Baseball Tracking concept for my blog. I think of each article as an "at bat." If an article generates comments and/or email to me, it's a single. A double is an article that gets referenced elsewhere, or that someone calls "great." Triples are positively received powertoys, since they presumably have a direct positive effect on OneNote users. Home runs are few and far between. I suppose our intern from last summer, who initially contacted me through my blog, would count - hiring is always tough.
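To make the idea concrete, here's a minimal sketch of how that scoring could be tallied. The outcome labels and the "any hit counts toward the average" rule are my own illustration of the scheme above, not a settled definition - an article that draws no response at all is scored as an "out":

```python
# Hypothetical scoring for the baseball-style blog tracking idea.
# Any kind of hit (single, double, triple, home run) counts toward
# the batting average; "out" means an article drew no response.
HITS = {"single", "double", "triple", "home_run"}

def batting_average(outcomes):
    """Each article is an at bat; return hits / at-bats, rounded to 3 places."""
    if not outcomes:
        return 0.0
    hits = sum(1 for outcome in outcomes if outcome in HITS)
    return round(hits / len(outcomes), 3)

# Example: ten articles, three of which drew some kind of response.
season = ["out", "single", "out", "out", "double",
          "out", "out", "out", "triple", "out"]
print(batting_average(season))  # a .300 "season"
```

Under this sketch, a ".300 average" just means roughly three articles in ten generate a positive response of some kind.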


I'm still working out the details of this in my head. I like the analogy: it's easy to understand, and my goal would be to maintain about a ".300 average" over time. I'll think about it some more and see what I can come up with. Feel free to post any ideas you have below.


Questions, comments, concerns and criticisms always welcome,

John