Avoiding Do-Overs - Testing Your Key Engineering Decisions

I noticed Rico has a Performance Problems Survey.  From what I've seen, the most important problem is failure to test and explore key engineering decisions.  By key engineering decisions, I mean the decisions that have cascading engineering impact.  By testing, I mean doing quick end-to-end tests with minimal code that give you insight into the costs and glass ceilings of different strategies.
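
To make that concrete, here's the shape of such a test: a minimal sketch in Python, where strategy_a and strategy_b are placeholders for the real alternatives you'd exercise end to end (say, two competing data access or caching strategies):

```python
import time

def measure(label, fn, iterations=1000):
    """Time a candidate strategy end to end and report rough throughput."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {iterations / elapsed:,.0f} ops/sec ({elapsed:.3f}s total)")

# Placeholder strategies -- in a real test these would be full round trips
# through your data access, caching, or security layers.
def strategy_a():
    sorted(range(10_000))

def strategy_b():
    sorted(range(10_000), reverse=True)

measure("strategy A", strategy_a)
measure("strategy B", strategy_b)
```

The point isn't precision benchmarking; it's getting order-of-magnitude answers early, before a strategy hardens into the architecture.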

When I was working in our developer labs, I would work with 50 or so customers in a week.  I had to quickly find the potential capability killers. To do so, I had to find the decisions that could easily result in expensive do-overs.  If I took care of the big rocks, the little rocks fell into place.

Here are the categories that I found to be the most useful for finding key engineering decisions:

  • Authentication
  • Authorization
  • Auditing and Logging
  • Caching
  • Configuration
  • Data Access
  • Debugging
  • Exception Management
  • Input and Data Validation
  • Instrumentation
  • Monitoring
  • State Management

As you can see, there's a lot of intersection among quality attributes, such as security, performance, and reliability.  One of my favorite, and often overlooked, capabilities is supportability (configurable levels of logging, instrumentation of key scenarios, and so on).  This intersection is important.  If you only look at a strategy from one perspective, such as performance, you might miss your security requirements.  On the other hand, sometimes security requirements will constrain your performance options, which helps you narrow down your set of potential strategies.  In fact, my challenge was usually to help customers build applications that were both scalable and secure.
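
For example, a supportable service reads its log level from configuration rather than hard-coding it, so support staff can turn up verbosity in production without a redeploy.  Here's a minimal sketch using Python's standard logging module (CONFIG is a stand-in for however your application actually reads settings):

```python
import logging

# Log level comes from configuration, not code.
# CONFIG is a hypothetical stand-in for your real settings source.
CONFIG = {"log_level": "WARNING"}

logging.basicConfig(
    level=getattr(logging, CONFIG["log_level"]),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("orders")

def place_order(order_id):
    log.debug("Validating order %s", order_id)  # only visible at DEBUG
    log.info("Order %s placed", order_id)       # key scenario instrumented
```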

By using this set, I could quickly find the most important end-to-end tests.  For example, data access was a great category because I could quickly test strategies for paging records, which could make or break the application.  I could also test the scalability impact of flowing the caller's identity to the database versus using a trusted service account to make the calls.
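
A paging test really can be that small.  Here's a minimal sketch in Python, with an in-memory SQLite table standing in for the real data store, comparing paging at the database against the accidental default of pulling everything back and slicing on the client:

```python
import sqlite3
import time

# In-memory table as a stand-in for the real data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("x" * 100,) for _ in range(100_000)])

def page_in_database(page, size=50):
    # Strategy A: let the database do the paging.
    return conn.execute(
        "SELECT * FROM records LIMIT ? OFFSET ?", (size, page * size)
    ).fetchall()

def page_in_memory(page, size=50):
    # Strategy B: pull everything, slice on the client.
    rows = conn.execute("SELECT * FROM records").fetchall()
    return rows[page * size:(page + 1) * size]

for label, fn in [("page in database", page_in_database),
                  ("page in memory", page_in_memory)]:
    start = time.perf_counter()
    fn(page=500)
    print(f"{label}: {time.perf_counter() - start:.4f}s for one page")
```

On a deep page of a large table, the gap between the two strategies is exactly the kind of glass ceiling these tests are meant to surface before the data layer hardens around the wrong choice.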

To contrast this approach of end-to-end design tests with typical prototyping: it wasn't about making a feature work or showing user experiences.  These architectural proofs, or spikes, were for evaluating alternative strategies and litmus-testing capabilities to avoid or limit downstream surprises and do-overs.