One thing that I've come to truly appreciate: the balanced scorecard. Don't get me wrong: I've been using scorecards and dashboards for over a decade. I helped build one at American Express. But I have come to see, from an executive level, why they are so freakin' useful... you can use them to hold people accountable for measurable strategic improvement.
With a scorecard, it is possible to reduce "passion-based decision making" in the organization without requiring every decision to be based on return on investment. (I like ROI, but only as a single measure within a balanced scorecard, not as the entire scorecard mechanism ;-). If everyone understands the mechanisms by which "organizational health" is measured, then it is OK to improve one measure at the expense of another if the final outcome moves toward "health."
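To make that trade-off concrete, here is a minimal sketch in Python. The measure names, weights, and values are entirely hypothetical, invented just to illustrate the idea of a weighted composite where one measure can dip while overall "health" still improves:

```python
# Hypothetical sketch: measure names, weights, and values are invented
# for illustration; they are not from any real scorecard.

def health_score(measures, weights):
    """Weighted composite of normalized (0-1) scorecard measures."""
    return sum(weights[name] * value for name, value in measures.items())

weights = {"roi": 0.4, "standards_compliance": 0.3, "stakeholder_trust": 0.3}

before = {"roi": 0.70, "standards_compliance": 0.50, "stakeholder_trust": 0.60}
# ROI dips, but compliance and trust rise enough that overall "health" improves.
after = {"roi": 0.65, "standards_compliance": 0.70, "stakeholder_trust": 0.75}

print(round(health_score(before, weights), 3))  # 0.61
print(round(health_score(after, weights), 3))   # 0.695
```

The point of the composite is exactly what the scorecard buys you: nobody has to argue that ROI is sacred, because everyone can see how the pieces roll up.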
In that vein, I'm looking at the measures that Enterprise Architecture should use to demonstrate alignment with critical IT strategies and business goals. We have to make sure that our work delivers value, and demonstrate that value, as part of our own scorecard.
The Corporate Executive Board, an excellent organization that brings together peers from across industry, put together a presentation on the various measures of Enterprise Architecture used by their member companies. I won't go into details, but it appears that the measures break down into four general areas:
- EA environment and activities -- these are what I call "Proof of Life" metrics. They are useful process metrics, and you can put ranges on them to push general activity. These kinds of measures include "number of to-be architectures defined" and "number of business processes mapped." Unfortunately, if these metrics are not properly aligned, they can end up being little more than "looking busy." They prove you are working, but not that the work is having a positive impact.
- EA compliance and adoption -- these are the "Proof of Effect" metrics. This is a lot closer to proving the case that EA is not only present and busy, but having an effect. These include measures like "% of applications used by more than one business," "% of projects compliant with EA standards," and "% of transactions that adhere to master data standards." Presumably, these are good performance indicators that can be rationally tied to business value. Having this measure is important. Having that connection to business value is also important. Note that the CEB study did not include two of the key measures that Microsoft IT finds important:
- % of Business Stakeholders that view IT as a trusted advisor and strategic partner, and
- % of Strategic Project Milestones reached on time
- Spending and Savings -- these are the "Cost Cutting" metrics. These are directly valuable to the business, as a single dollar of cost saved can go straight to the bottom line. This group of measures includes things like "savings from a reduction in interfaces," and "savings from standardized purchase agreements." You often need the "Proof of Effect" metrics to back up this group, to show that there is a correlation. Otherwise, you can leave open the possibility of having a really large impact, for which another group is given credit. For those of you involved in getting funding for EA, you'll recognize how perilous that road can be.
- Revenue and Profit -- these are the "Value Stream" metrics. These metrics are valuable to the company's stockholders in the most visible of ways. Metrics like these can include "revenue from new IT-enabled business capabilities" or "opportunity benefits of agility: revenue during time-to-market savings." Unfortunately, it can be a long road between "govern the standards of an IT project" and "increase revenue." At this level, EA can be part of a contribution to IT alignment, agility, and quality, which can be part of a contribution to business agility and performance, which contributes to business profitability. On the other hand, I think that these numbers are not the best measure of EA performance, since the contribution can vary wildly from one project to the next, or even one quarter to the next, due to conditions that are completely outside of the control of EA (or even IT). In many cases, these measures are the "cinnamon air freshener" of the CIO's office. They smell nice, but vanish quickly, leaving behind no evidence that they were ever there.
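If you wanted to turn the four tiers above into an actual report, a simple rollup is enough to start. This is an illustrative sketch only: the metric names, actuals, and targets are hypothetical, and the tier labels come from the grouping described above:

```python
# Illustrative sketch: metric names, actuals, and targets are hypothetical;
# the four tiers mirror the grouping in the post above.

from collections import defaultdict

TIERS = ("Proof of Life", "Proof of Effect", "Cost Cutting", "Value Stream")

metrics = [
    ("to-be architectures defined",            "Proof of Life",   12,  10),
    ("% projects compliant with EA standards", "Proof of Effect", 85,  80),
    ("savings from interface reduction ($K)",  "Cost Cutting",    400, 500),
    ("revenue from new capabilities ($K)",     "Value Stream",    900, 750),
]

def scorecard(rows):
    """Group (name, tier, actual, target) rows by tier and flag misses."""
    report = defaultdict(list)
    for name, tier, actual, target in rows:
        status = "on track" if actual >= target else "behind"
        report[tier].append((name, actual, target, status))
    return report

for tier in TIERS:
    for name, actual, target, status in scorecard(metrics).get(tier, []):
        print(f"{tier}: {name} -> {actual}/{target} ({status})")
```

Even a toy rollup like this makes the earlier point visible: the "Cost Cutting" line can be behind while the "Proof of Effect" line is healthy, which is exactly the conversation a balanced scorecard is supposed to enable.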
Personally, I found this study useful on so many fronts. It gave me context, ideas, and key questions to answer. But now I'd like to ask you, the practitioner... what do you think?
If it were up to you to create a set of measures for Enterprise Architecture, what metrics would you collect? What metrics would you ignore?