Why? Why! Why do managers make stupid decisions that cause devastating churn and tawdry results? And it’s not just managers, though they are particularly proficient at promoting poor performance—architects, leads, and individual contributors flood the lives of their team with wasteful, useless, misdirected activity, leaving us even less opportunity to deliver real value. What reason is there for this farce? Simple. We are optimizing—optimizing our obsolescence.

What kind of idiot optimizes their own undoing? The ordinary kind. You do it, your friends do it, and your boss does it. It’s all those good intentions that pave the way to disaster. We optimize the wrong behavior to achieve the wrong results. It’s wrong and avoidable, but hey, why think when you can cause mayhem with so little effort?

You want answers?

Let me save you some trouble and reading by giving you the answer first—optimize for desired results. It sounds simple and obvious, but people pervert that goal in so many imaginative ways that I’d better break it down word by word.

§  Optimize—Measure how good you are now, analyze how you could be better, alter your approach, and then measure again. Microsoft is great at optimizing, but we measure what’s handy rather than what matters. So, we optimize for the wrong result. You can read more about this in my column, How do you measure yourself?

§  For—Have a purpose. Optimizing for the sake of optimizing is purely self-gratification—don’t do it in public. Instead, be deliberate about your purpose. Think it through. Know what you are doing. Be a professional. Wake up.

§  Desired—Focus on what you want, not what you don’t. This is a common trap. People optimize around the problem instead of the solution. Bureaucracies and slow software are built upon this misdirection. They focus on controlling people or code to prevent the wrong behavior. Their focus should be on making the right behavior fast and easy, and then catching the exceptions.

§  Results—Never optimize a step or algorithm in isolation. Instead, optimize the end result you seek. We have all experienced the impact of local versus global optimization. It kills our efficiency and innovation; it’s killing our planet. Yet over and over again, people can’t see beyond the problem at hand to consider the outcome they’re truly after.
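Put together, “optimize for desired results” fits in a few lines of code. This sketch is mine, not from the column; the variants and their measured throughput numbers are hypothetical:

```python
# Minimal sketch: choose among variants by measuring the desired result,
# not whatever is handiest to measure. Numbers below are made up.
def best_variant(variants, measure):
    """Return the variant that scores highest on the desired result."""
    return max(variants, key=measure)

# Optimize for pages served per second, not for lines of code written.
measured = {"v1": 120, "v2": 180, "v3": 150}  # pages/sec from profiling
print(best_variant(measured, measured.get))   # prints "v2"
```

The whole trick is in what `measure` measures—swap in the wrong metric and the same loop happily optimizes your obsolescence.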

Can you recognize when you are de-optimized? Let’s run through some examples and check.

Eric Aside

I realize it’s a little confusing to talk about optimizing both code issues and people issues in the same column. I couldn’t resist because the number of similarities is startling. If it’s easier for you, just think about whichever problem you prefer.

I think I can handle this

How do you handle run-time errors in your code? How fast does your code run when no errors arise? Is it a smooth or bumpy ride for error-free operation? The fastest, simplest path through your code should be the 80% case, not the 20% case. However, that doesn’t mean you shortchange error handling; you just don’t optimize around it. Trust your code will run error-free, making it run fast. Verify it was error-free, ensuring the right result. Trust but verify.
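Here’s a minimal sketch of trust-but-verify error handling. The price-parsing task and function name are my invention, not from any real codebase: the error-free 80% path does no per-item checking, and verification happens once on the result.

```python
def parse_prices(lines):
    """Fast path: trust every line parses; verify the batch afterward."""
    try:
        # The 80% case: no per-line checking, so it runs fast.
        prices = [float(line) for line in lines]
    except ValueError:
        # The rare 20% case: a slower pass that isolates the bad lines.
        bad = []
        for line in lines:
            try:
                float(line)
            except ValueError:
                bad.append(line)
        raise ValueError(f"bad input lines: {bad}")
    # Verify: one cheap sanity check on the result, not on every step.
    assert all(p >= 0 for p in prices), "negative price slipped through"
    return prices
```

The happy path stays fast because it trusts its input; the verification at the end ensures the right result. Trust but verify.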

Likewise, how do you handle people and process errors? Do you check their every move? Do you have people do it your way, jump through hoops, and fill out redundant forms to ensure they aren’t cheating? Or do you trust people to do the right thing, clearing the desired path of obstructions, and later verify that the work was done properly? Trust but verify.

My altruistic readers, including managers, might claim that they do trust their coworkers. Really? How did you react the last time something went wrong? Did you quickly fix the root cause and move on, or did you start an inquisition, randomizing your team for days or weeks? Do you micromanage or do you delegate? Do you specify every step or do you specify the result? Trust is hard. Luckily, you’re being paid.

Déjà vu

How decoupled and cohesive is your code? Are the classes, functions, and functionality all intertwined and unmanageable, or are they independently testable and separable, each piece having its own purpose and task to perform?

Well-architected and layered code is far easier to test, maintain, and enhance. However, it doesn’t perform quite as well as tightly coupled code. It’s a tradeoff. If you optimize purely for speed, you eventually get unmaintainable spaghetti code. If you optimize purely for architecture, you can’t be competitive in performance. How do you strike the right balance?

Most teams don’t strike a balance between architecture and performance—they ride a rollercoaster:

1.      The team starts with a nice architecture. It works great and everyone feels good.

2.      They optimize it for performance. Now it works better—the original team clearly wasn’t as sophisticated.

3.      The code is unmanageable, it can’t be enhanced, and performance has hit its limits, so the team painfully refactors the code. Now it’s manageable again and everyone is happy—the prior team were clearly neophytes.

4.      The performance isn’t competitive, so the team optimizes again for performance. Now it’s competitive again—the prior team clearly had lost its way.

5.      Now the code is unmanageable, so return to step 3.

There’s another variation—the code is so twisted that the team can’t fathom refactoring it. Their product cycle keeps getting longer, and the code keeps getting slower, requiring more memory and processing power. That’s a popular variation.

The right approach is to optimize for desired results—performant code that’s easy to maintain and enhance. Instead of just measuring the speed (easy), you measure the speed and the code complexity. You seek the optimal balance of both. If you’re really sophisticated, you’ll also measure team health and customer satisfaction indicators, seeking a balance of all four. Wow, that’s almost like running a business.
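As a hypothetical sketch, measuring that balance might look like a weighted scorecard. Every metric name, number, and weight here is an illustrative assumption, not a real measurement; each metric is normalized to 0..1, higher is better:

```python
# Hypothetical sketch: score a design on a balance of factors rather
# than speed alone. All values below are illustrative assumptions.
def balanced_score(metrics, weights):
    """Weighted sum of normalized (0..1) metrics."""
    return sum(metrics[name] * w for name, w in weights.items())

weights = {"speed": 0.4, "maintainability": 0.3,
           "team_health": 0.15, "customer_sat": 0.15}

fast_but_tangled = {"speed": 0.95, "maintainability": 0.2,
                    "team_health": 0.5, "customer_sat": 0.7}
balanced_design  = {"speed": 0.8, "maintainability": 0.8,
                    "team_health": 0.8, "customer_sat": 0.8}

# fast_but_tangled: 0.38 + 0.06 + 0.075 + 0.105 = 0.62
# balanced_design:  0.32 + 0.24 + 0.12  + 0.12  = 0.80
```

The design that wins on raw speed (0.62) loses to the balanced one (0.80) once maintainability and the human factors count. Wow, that’s almost like running a business.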

The beat of a different drummer

Let’s try one more subtle example—product team structure. It’s a war zone out there between traditional product development and the upstart Agile adherents. Who’s right? Who cares! Never optimize around a step in isolation—optimize for desired results.

The desired result is delivering the most customer value in the shortest time. Remember, customer value is not measured by feature count; it’s measured by delivering delightful end-to-end scenarios with high quality.

So how do you deliver high value quickly? You apply the Theory of Constraints (TOC). TOC says that the pace at which a project can accomplish anything is constrained by its slowest step. Say your user experience, content publishing, and operations teams are shared and can scale to your needs; your PM team can spec an average of four features in a month; your development team can code two features in a month; and your test team can validate three features in a month. There’s not much point in your PM team going full speed, is there?
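The numbers above boil down to a one-line calculation (the rates are copied from the example; the dictionary is just illustrative):

```python
# Theory of Constraints: project throughput equals the slowest step's
# rate, not the sum of every team's capacity. Rates are features/month.
rates = {"pm_specs": 4, "dev_codes": 2, "test_validates": 3}

constraint = min(rates, key=rates.get)   # the slowest step
throughput = rates[constraint]           # the whole project's pace

print(constraint, throughput)            # prints "dev_codes 2"
```

No matter how fast PM writes specs, the project ships two features a month until the dev constraint itself is improved.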

Yet managers will push the PM team to keep writing specs the dev team can’t process—optimizing locally instead of globally. Adding people to speed up the dev team doesn’t work either (note The Mythical Man-Month and the economy)—again, the focus is too narrow.

The right solution is to pace the PM and test teams to the dev team. Put in buffers to account for variability between features, but never have the PM and test teams outpace the dev team. This TOC strategy is called Drum-Buffer-Rope. Because it’s hard to precisely predict the dev team’s pace, you constrain the size of buffers, avoiding too much work in the dev team’s queue should the situation change.
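Drum-Buffer-Rope can be sketched as a toy simulation (rates from the example above; the buffer cap of three specs is an arbitrary assumption):

```python
from collections import deque

def simulate(months, pm_rate=4, dev_rate=2, buffer_cap=3):
    """Toy Drum-Buffer-Rope: dev is the drum, the spec queue the buffer,
    and the cap on that queue the rope holding PM back."""
    queue = deque()  # specs waiting for dev
    shipped = 0
    for _ in range(months):
        # PM writes specs only while the buffer has room (the rope).
        for _ in range(pm_rate):
            if len(queue) < buffer_cap:
                queue.append("spec")
        # Dev (the drum) sets the real pace.
        for _ in range(dev_rate):
            if queue:
                queue.popleft()
                shipped += 1
    return shipped, len(queue)

print(simulate(6))  # prints "(12, 1)"
```

The rope keeps PM from piling up specs dev can’t process, yet the buffer ensures dev never starves; throughput still matches the drum at two features a month.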

This is why Feature Crews work so well. You’re optimizing for the desired result—working scenarios. In Feature Crews (an approach from Office), PM, dev, and test team members tie themselves to one piece of a scenario at a time until it’s completely tested and integrated. They can’t get ahead of each other. Versions of Scrum and Extreme Programming work the same way. It’s not the combined teams that are essential (though communication is easier); it’s the pacing of the work together that optimizes the delivery of complete, high-quality customer value.

Don’t panic

It’s so easy to get caught up in the immediate and optimize around the issues directly in front of your face, instead of the ones you actually care about. People do it all the time—I guess we’re programmed instinctively that way. That’s a perfectly good reason to optimize the wrong behavior for the wrong results, but it’s a poor excuse.

You should know better, and if you don’t, you have no right to draw a paycheck. Consider the result you desire to achieve; think it through; measure a balance of factors; and optimize as a whole. It’s not that difficult. We attempt it every day as we balance our lives. The key is to be deliberate rather than juggle; to be planned rather than panicked. You can do it if you simply keep your sights on the finish line.

Comments (2)

  1. asymtote says:

    Scrum and Extreme Programming just seem like software engineering versions of the Atkins diet to me. They are a fad that will die out once all the product-pushing consultants have extracted the easy money and moved on to something else.

    Here’s the simple truth. If you want to lose weight then consume fewer calories than you burn. If you want to increase the productivity of a software development team then improve the communications between its members. The reason why feature crews work so well is because communications efficiency has been made the top priority.

    All communication systems can be characterized by the number of connections in the system, the end-to-end transmission latency and the overall data throughput between endpoints. Feature crews reduce the number of connections by (typically) keeping the teams small. Latency (or the time taken by a member to initiate a communication and/or to respond to one) is reduced because each member of the team is actively working on that feature (i.e. minimal priority conflicts, context switch time etc.) and the throughput (which can also be thought of as the value of the information being exchanged) is high because of the shared responsibility to ship the feature.

    As an intellectual exercise try going through every software development process you’ve been involved with and make an assessment of its productivity level and of the emphasis it put on human communication efficiency. Not only do I suspect you’ll find that the two measures are correlated but I believe that there is a direct causative relationship.

  2. Rick says:

    This reminds me of the old hardware engineering adage "I can make it cheaper, but it’s going to cost you". Often, the investment in optimization overwhelms the return.

    My experience in software engineering projects is that the overhead of a more complex process can overwhelm any gains achieved by the slight improvement it brings. Every minute I spend "modeling" my project instead of actually doing my project is overhead. There is little that can replace simply staying on task a higher percentage of the time.

    In my younger days, I frequently asked job candidates how many 100-hour weeks they had ever worked in their life (if any). If they ran out of fingers trying to remember and count them up, that was a good answer. Nowadays, I plan projects where we assume only 20 hours of productive work per week.

    I know that achieving more than 20 hours of productive work per week may conflict with some people’s idea of an appropriate work/life balance, but if you consider your life’s work to be creating software, then it’s a lot less alarming.

    The other key is knowing what the "task" is. This is where the communication mentioned by the previous commenter comes in. It’s also where experience and having people who live and breathe this stuff make a huge difference. They already know a lot of the answers and less communication is needed.

    The above two concepts lead to a simple solution to optimize the creation of software:  Hire software engineers who live and breathe this stuff and consider it their life’s work, and want to spend more than 20 hours per week doing it.

    – Rick