Applying Value Up at Microsoft

Let me start with thanks for Rob Caron's persistence in encouraging me to blog.  I haven't blogged since the initial Team System announcement at Tech∙Ed in 2004 (Announcing Visual Studio 2005 Team System).  I promised him that, as soon as my book went to press, I'd start a regular column.  The book comes out this week, and I'll be speaking at two conferences – VSLive and STAR East – so now seems an auspicious time. 

In my book, I discuss what I call the value-up paradigm.  In short, this is the team's focus on flow of customer value as the driver of a project, as opposed to working down through a list of planned tasks.  In the book, I list seven characteristics of this paradigm, describe its implications for each of the disciplines on the team, and describe its implementation with Team System.  Here's the table from the book.  (If you've read either of the current versions of MSF, the value-up concepts should be very familiar.)

Core assumption: Work-down attitude vs. Value-up attitude

Planning and change process
  • Work-down attitude: Planning and design are the most important activities to get right.  You need to do these initially, establish accountability to the plan, monitor against the plan, and carefully prevent change from creeping in.
  • Value-up attitude: Change happens; embrace it.  Planning and design will continue through the project.  Therefore, you should invest in just enough planning and design to understand risk and to manage the next small increment.

Primary measurement
  • Work-down attitude: Task completion.  Because we know the steps to achieve the end goal, we can measure every intermediate deliverable and compute earned value as the percentage of hours planned to be spent by now versus the hours planned to be spent to completion.
  • Value-up attitude: Only deliverables that the customer values (working software, completed documentation, etc.) count.  You need to measure the flow of the work streams by managing queues that deliver customer value and treat all interim measures skeptically.

Definition of quality
  • Work-down attitude: Conformance to specification.  That's why you need to get the specs right at the beginning.
  • Value-up attitude: Value to the customer.  This perception can (and probably will) change.  The customer may not be able to articulate how to deliver the value until working software is initially delivered.  Therefore, keep options open, optimize for continual delivery, and don't specify too much too soon.

Acceptance of variance
  • Work-down attitude: Tasks can be identified and estimated in a deterministic way.  You don't need to pay attention to variance.
  • Value-up attitude: Variance is part of all process flows, natural and man-made.  To achieve predictability, you need to understand and reduce the variance.

Intermediate work products
  • Work-down attitude: Documents, models, and other intermediate artifacts are necessary to decompose the design and plan tasks, and they provide the necessary way to measure intermediate progress.
  • Value-up attitude: Intermediate documentation should minimize the uncertainty and variation in order to improve flow.  Beyond that, it is unnecessary.

Troubleshooting approach
  • Work-down attitude: The constraints of time, resource, functionality, and quality determine what you can achieve.  If you adjust one, you need to adjust the others.  Control change carefully to make sure that there are no unmanaged changes to the plan.
  • Value-up attitude: The constraints may or may not be related to time, resource, functionality, or quality.  Rather, identify the primary bottleneck in the flow of value, work it until it is no longer the primary one, and then attack the next one.  Keep reducing variance to ensure smoother flow.

Approach to trust
  • Work-down attitude: People need to be monitored and measured to standards.  Incentives should be used by management to reward individuals for their performance relative to plan.
  • Value-up attitude: Pride of workmanship and teamwork are more effective than individual incentives.  Trustworthy transparency, where the whole team can see all the team's performance data, works better than management directive.


When I joined Microsoft in 2003, I began driving the value-up approach to planning, managing and implementing Visual Studio Team System.  At the time, it was a big change for most of my 200 or so Team System colleagues, but we were a new project with strong leadership and a clear charter to focus on our customers’ requirements, breaking through a decade of stagnation in the market. 

Over the last several months, I’ve been heads down mentoring Developer Division on value-up planning for the Orcas release.  We’ve been trying to repeat the cycle on a scale ten times larger as we’ve been operationalizing this in Developer Division.  Along the way, there have been interesting issues of scale and culture change that I’d like to share.

Scenarios, Value Props, Experiences, Features

Dev Div is an organization conditioned over two decades to think in terms of features.  Define the features, break them down into tasks, work through the tasks, etc.  The first step in shifting to the value-up paradigm was to take a holistic and consistent approach to product planning.  We introduced a taxonomy of functional product definition that covers end-to-end scenarios, value propositions, experiences, and features.  For each level, we used a canonical question to frame the granularity.  We rolled out training for teams, similar to Chapter 3 (“Requirements”) in my book.

Conceptually, the taxonomy looks like this. 

End-to-end Scenarios

Each end-to-end scenario is targeted at a particular customer profile and is designed to capture a vision of enough business value for a customer to decide to purchase or upgrade to the new version.

Value Propositions

In an end-to-end scenario, we start by considering the value propositions that motivate customers (teams or individuals) to work with our platform and tools. We consider the complete customer experience during development, and we follow through to examine what it will take to make customers satisfied enough to want to buy more, renew, upgrade, and/or recommend our software to others.

A value proposition is a way of defining tangible customer value with our products. It addresses a problem that customers face, stated in terms that a customer will relate to. A value proposition is represented in the following statement that a customer might make: We would work with your product if it helped us to [value proposition].


Value propositions translate into one or more experiences. Experiences are stories that describe how we envision users doing work with our product: what user tasks are required to deliver on a value proposition?


Experiences in turn drive features. As we flesh out what experiences look like, we spec the features that we need to support the experience. A feature can support more than one experience. (In fact, this is common.)

We also created two value props that didn’t really belong to scenarios, called “Legacy Qualities of Service” and “Remove Customer Dissatisfiers”.  To manage this data, we set up a team project in our Team Foundation Server that we call (inappropriately, but for historical reasons) the “feature directory”.  We have separate work item types for each of the value proposition, experience, and feature.
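To make the containment concrete, here is a minimal sketch of the taxonomy in Python. It is purely illustrative; the real "feature directory" lives in TFS work item types, and every name in the example data below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str

@dataclass
class Experience:
    name: str
    features: list = field(default_factory=list)

@dataclass
class ValueProposition:
    name: str
    experiences: list = field(default_factory=list)

@dataclass
class Scenario:
    name: str
    customer_profile: str
    value_props: list = field(default_factory=list)

# A feature can support more than one experience (in fact, this is common).
refactor = Feature("Hypothetical refactoring feature")
edit = Experience("Edit code confidently", [refactor])
maintain = Experience("Maintain legacy code", [refactor])
vp = ValueProposition("Help my team modify code safely", [edit, maintain])
scenario = Scenario("Enterprise development", "Team developer", [vp])

# Two value props sit outside any scenario, as described above.
standalone = [ValueProposition("Legacy Qualities of Service"),
              ValueProposition("Remove Customer Dissatisfiers")]
```

The point of the structure is the containment chain (scenario → value prop → experience → feature) plus the fact that the feature level is many-to-many with experiences.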

Loading the Train

Key issues in every software project are scheduling and managing the backlog and release scope.  We had to develop rules that we could apply across a product of this breadth for prioritizing the envisioned functionality at successive levels of granularity.  We used three categories, Critical, Value Add and Incubate, to prioritize the scenarios, value propositions, experiences and features.

  • Critical value props are ones around which we would build the schedule.  In other words, these are value props that we cannot ship without, and we will adjust schedule and resources to make them fit.
  • Value Add value props (awkwardly named, I agree) are ones we want to deliver in the release, but for which we won’t adjust the schedule.  They get resources after the critical ones.
  • Incubate value props are ones that we plan for subsequent releases from the beginning. 

We stack-ranked the value props by scenario, applied this rating, proceeded to elaborate the experiences within each value prop, stack-ranked and rated the experiences by value prop, and repeated the process for features within experiences.  We vetted these heavily with customers at conferences and special meetings, and with non-customers in focus groups.
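One way to picture the resulting ordering rule (category precedence first, then stack rank within each category) is as a sort key. This is only a sketch of the rule described above; the item names and ranks are invented for illustration.

```python
# Critical items come first, then Value Add; Incubate is deferred to later releases.
CATEGORY_ORDER = {"Critical": 0, "Value Add": 1, "Incubate": 2}

def backlog_order(items):
    """Order backlog items by category precedence, then stack rank within category."""
    return sorted(items, key=lambda i: (CATEGORY_ORDER[i["category"]], i["rank"]))

backlog = [
    {"name": "VP-3", "category": "Value Add", "rank": 1},
    {"name": "VP-1", "category": "Critical", "rank": 2},
    {"name": "VP-2", "category": "Critical", "rank": 1},
    {"name": "VP-4", "category": "Incubate", "rank": 1},
]
# backlog_order(backlog) yields VP-2, VP-1, VP-3, VP-4
```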

Next, we assessed the risk and cost of the features (to 3-day granularity) and segregated them into high- and low-confidence buckets.  Then we reassessed the experiences, so that only those that still hold together, based on the costing of their features, retain high confidence.  Along the way, we've continued to vet these experiences with customers, including their review of the experience specs.
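The bucketing logic can be sketched roughly as follows. The actual assessment weighed risk as well as cost, so the size-threshold rule and the 15-day figure here are purely hypothetical stand-ins; the key property is that an experience inherits the confidence of its weakest feature.

```python
def feature_confidence(estimate_days, threshold_days=15):
    """Hypothetical rule: a feature costed small enough counts as high confidence."""
    return "high" if estimate_days <= threshold_days else "low"

def experience_confidence(feature_estimates):
    """An experience retains high confidence only if every contributing feature does."""
    if all(feature_confidence(d) == "high" for d in feature_estimates):
        return "high"
    return "low"
```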

This gives us a release backlog, stack-ranked within value prop and by contributing team, of critical and high-confidence experiences and features.  We use these to lay a planned iteration schedule, and we can measure its completion on a daily basis at each level — in terms of features, experiences, or value props.  (In practice, there are a few other scheduling factors, notably dependencies across teams and resource availability, but I'm ignoring them here for simplicity.)
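The daily measurement can be pictured as a simple rollup that counts only features that are genuinely complete, with no partial credit for work in flight. This is a hypothetical sketch, not the actual TFS report logic.

```python
def percent_complete(features):
    """Progress = fraction of features actually complete; no credit for partial work."""
    done = sum(1 for f in features if f["complete"])
    return 100.0 * done / len(features)

# Illustrative data only: two of three features in this experience are done.
experience = {
    "name": "Hypothetical experience",
    "features": [{"name": "F1", "complete": True},
                 {"name": "F2", "complete": True},
                 {"name": "F3", "complete": False}],
}
```

The same rollup applies at each level: features roll up to experiences, experiences to value props.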

At this point you might be screaming, “How waterfallian!  What BDUF!”  Actually, no.  We’re managing the release in a series of 5-week iterations and in most cases, we’re doing detailed design only an iteration ahead.  Each iteration will produce a Community Technology Preview (CTP) as a deliverable increment of working software.  The learning from each iteration will feed into the design of the next.

At the same time, we do have a clear target, a backlog stack-ranked and understood in customer value, with a clear delineation of critical and value-add functionality.  We have rules for revising the ranking at iteration boundaries.  We also have transparent, daily assessment of progress. 

Quality Gates

In a key shift to value-up management, we have abandoned any measure of being Code Complete in favor of being Feature Complete.  Feature Complete gets measured by passage of Quality Gates, which capture quality-first practices from processes like MSF and XP.   Given the amount of literature around Code Complete, you'll recognize the significance of abandoning it in favor of the Feature Complete measure.

Feature Complete attempts to measure incremental customer value and keeps the whole product in working order as it evolves.  In addition to executing tests (unit and integration) with code coverage requirements, Quality Gates check for key qualities of service, such as security, performance, localizability, and usability.  The project is too big for a “common code base” (i.e., the use of a single source branch).  Rather, we use one Main branch for integration, and each “feature crew” has its own working branch.  Although each feature crew has full control over its private branch, the Quality Gates are applied stringently on delivery from the feature branch into Main. 
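A sketch of the gate check applied on delivery into Main might look like the following. The specific checks mirror the list above, but the field names and the coverage bar are hypothetical; the essential property is that every gate must pass before the merge.

```python
def passes_quality_gates(feature):
    """Evaluate the Quality Gates for a feature crew's delivery into Main.

    All gates must pass; there is no partial credit.
    """
    checks = [
        feature["unit_tests_pass"],
        feature["integration_tests_pass"],
        feature["code_coverage"] >= 0.70,      # hypothetical coverage requirement
        feature["security_reviewed"],
        feature["perf_within_budget"],
        feature["localizability_checked"],
    ]
    return all(checks)

# Illustrative delivery record for a feature crew ready to merge into Main.
ready = {"unit_tests_pass": True, "integration_tests_pass": True,
         "code_coverage": 0.82, "security_reviewed": True,
         "perf_within_budget": True, "localizability_checked": True}
# passes_quality_gates(ready) → True; failing any single check blocks the merge
```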


Because we are using TFS, everyone can access daily reports that measure the forward progress and status of the release.  AFAIK, it’s the first time at the scale of Dev Div that we’ve been able to see progress of customer value as it’s been implemented.  We’ll use this to communicate the CTP contents as well, so that you know what to look for in each increment. 

What’s Next

I realize that I’m making everything seem easy and seamless, and of course, it hasn’t been.  One of the key learnings echoes DeMarco & Lister – If you don’t have enough time, start earlier.  We began planning Orcas very late, and because the product teams were heads down finishing the 2005 release (“Whidbey”), we did not engage them broadly until after a great deal of envisioning had been done.  We made this choice knowing the risk, but underestimated the amount of time it would take to reset everyone’s thinking to a common level.

We've started the first iteration of building the product, and we'll be managing change and measuring value and velocity using Team Foundation Server.  We'll prove the Quality Gate model over the next iterations.  I'll keep you posted.