I caught up with fellow Microsoft Aussies Chris Vidotto and Steve Wright last night; most will know Chris from his new blog and its baby-like features.
Over a few beers, we got onto the topic of “What would it be like if a software product was viewed as a company?”. Jeez, why would you do this? Well, we were talking about Microsoft Word. Steve brought up the concept of “over-featuring”: having a product so feature-rich that most users never touch many of its features. Now, I admit everyone has been familiar with this for a while; in fact, it has always been a concern of product companies: will our next release have enough innovation to compel users to buy?
But what about the features that have been lying around for years, that still need to be maintained, but are used by only a handful of users on an infrequent basis? I proposed the concept of making each product feature responsible for its own cost (resources × time) versus its return (how much it is used). Now, tracking the first metric, cost, is possible within the product company, as you can capture resource planning and use. But the second metric, end-user feature use, isn’t as easy. So I gotsta thinking…
How cool would it be if there were a .NET framework that a developer could use in their apps, where they could mark a method, or methods, with a “FeatureTag” attribute? I was thinking along the lines of:
[FeatureTag("SearchForCustomer")]
public void SearchForCustomer();

[FeatureTag("CreateNewCustomer")]
public void CreateNewCustomer();
Then when the end user runs the application, the measurement framework would start to collect both user behavior and feature usage patterns, and store it all in an XML manifest. On a regular basis, the framework would then connect to an aggregation web service hosted by the product company and dump all the data, without any customer or user profile info. I think the key is not who was doing what, but what was being done.
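To make the idea concrete, here is a minimal sketch of the tag-plus-counter mechanics. It's in Java rather than C# purely because that's easier to demonstrate here: a runtime-retained annotation plays the role of the .NET "FeatureTag" attribute, and a static map of counters stands in for the XML manifest. All the names (FeatureTag, FeatureUsage, CustomerService) are made up for this sketch, not any real framework.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical marker, the Java analogue of a .NET [FeatureTag] attribute.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FeatureTag {
    String value(); // the feature name to report against
}

// Minimal in-process recorder: counts hits per feature name.
// A real framework would periodically serialize these counts (the XML
// manifest above) and post them to the aggregation web service,
// stripped of any customer or user identity.
class FeatureUsage {
    private static final Map<String, Long> counts = new ConcurrentHashMap<>();

    static void record(String feature) {
        counts.merge(feature, 1L, Long::sum);
    }

    static long hits(String feature) {
        return counts.getOrDefault(feature, 0L);
    }
}

// Example app code carrying the tags.
class CustomerService {
    @FeatureTag("SearchForCustomer")
    public void searchForCustomer() {
        FeatureUsage.record("SearchForCustomer"); // instrumentation hook
    }

    @FeatureTag("CreateNewCustomer")
    public void createNewCustomer() {
        FeatureUsage.record("CreateNewCustomer");
    }
}
```

In a fuller version the recording call wouldn't be hand-written at all; the framework would discover the tags via reflection (or IL weaving, in .NET terms) and inject the hook itself, which is what makes the attribute approach attractive.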
So then we could start to get a dashboard view of Feature X in terms of how much it costs to have it versus how worthwhile it is.
Now I know there are a few key issues, such as the performance cost of running the measurement framework (why add execution time to the user experience just for metric collection?), and how feasible it is to have data being sent back to the product company.
My thoughts on that are: you could turn the framework on and off, so maybe for Beta or CTP releases it would run, and for RTM you could turn it off. As for sending data back to base, we (Microsoft) already have this functionality in the way we do error reporting, so it’s obviously not that much of a hurdle.
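The on/off switch could be as cheap as a single flag check, so the instrumentation ships in every build but only Beta/CTP builds flip it on. A sketch of that gate (again in Java for convenience; the class and the "feature.metrics" property name are invented for this example):

```java
// Hypothetical kill switch: a Beta/CTP build would launch with
// -Dfeature.metrics=true, while an RTM build leaves the property
// unset, so end users pay only a single branch per tagged call.
class MetricsSwitch {
    // Boolean.getBoolean reads a JVM system property; unset means false.
    static final boolean ENABLED = Boolean.getBoolean("feature.metrics");

    static void record(String feature) {
        if (!ENABLED) {
            return; // disabled in RTM: no collection, no upload
        }
        // ... count the hit and queue it for the next upload here ...
    }
}
```

The .NET equivalent would read an app.config setting or a conditional compilation symbol, but the shape is the same: the decision is made once, and the disabled path does essentially nothing.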
Hmm, if anyone has seen anything like this, or has some ideas/comments/feedback, please let me know.