Quantum Testing

Once upon a time, I thought testing was about finding bugs.

Once upon a time, I thought I should be able to find every bug in my product.

Once upon a time, I thought testing was about ensuring no bugs escaped to customers.

Once upon a time, I thought every single test should be automated.

One day I stopped testing for a while and thought about why I thought these things. I started by making a list of every reason I could think of to automate a test. It was a long list. Reviewing it, however, I realized every reason boiled down to one of two basic reasons:

  1. I wanted to be notified if a defect ever (re)occurred.
  2. Automating the test was faster than doing it manually.

This got me thinking about why I was testing in the first place. Soon I realized that I wasn't testing to find bugs - I was testing *because* defects had been found and the team wanted to know how many other defects were present.

Upon further consideration I realized that was not exactly correct. I had learned through experience that I would never find every defect. I had also learned through experience that my management did not expect me to find every defect.

So why was I testing?

Aha! I was testing so that I could provide my opinion as to whether the product was ready to ship or not!

Upon further consideration I realized that was not exactly correct. I had learned through experience that my opinion as to whether the product was ready to ship might be overruled by people up my management chain.

So why was I testing?

Several similar cycles later, I came to a conclusion:

My team is building a product. The team is composed of human beings. Human beings are fallible and make mistakes, thus the team is fallible and will make mistakes. Some of these mistakes will take the form of defects. Some of these defects will prevent our product from serving its intended purpose well enough to meet the business goals my team has for our product. I am testing in order to provide information regarding how well our product serves its intended purpose. This information is used by people up my management chain to decide whether shipping our product or taking additional time to refine our product will provide the most business value.

Once I spelled this out all sorts of things suddenly made sense. For example, "refining" might mean fixing defects. It might also mean adding additional features, or expanding existing features, or cutting completed features. Now I understood why each of these might occur a week before our scheduled ship date. Now I also understood why we might ship with what I considered heinous problems.

With this realization I started re-evaluating everything I did in terms of business value. My quest to reduce the cost of UI automation stemmed in part from this, because lowering that cost meant my team and I could complete more testing in a shorter amount of time and thus provide deeper information to the people up our management chain more quickly. And in fact that has turned out to be true.

Of late, however, I find myself thinking that continuing this quest may not be worth the investment. The changes we have wrought seem to me small, especially in the face of the exponentially exploding complexity of software today. I find myself questioning the business value of all the time I spend automating tests, and updating them to keep up with the product they are testing, and fixing defects in them and the infrastructure they use. This time seems to me better spent using my brain to identify the biggest risks to the business value my product is meant to create, working to prevent these risks from reifying, exploring my product in search of triggers for those risks, and - yes - crafting automated tests as seems appropriate.

Of late, however, I find myself questioning the business value of even this approach. I do not see how it can keep up with the exponentially exploding complexity of the software which will be here tomorrow. I feel as though there is a quantum leap I can make which will put myself ahead of this curve. I have not found it yet. I continue to search.

If you have any ideas how to find it, please let me know!

*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.

Comments (24)

  1. I think the quantum leap you’re looking for lies with the developers. The only way to get ahead of the automation curve is to automate before development starts. Then we’re getting into requirements-based testing, which I think is far more cost effective.

    The way I see things is that I used to look for bugs, now I look for ways of preventing them from being created.

    I realize that someone will always be looking for bugs but I think far too few people are looking at how to prevent them from happening.

  2. I’ve been thinking about this a lot lately myself.

    I’ve heard testing defined variously as "finding bugs," "being a customer advocate," and "reporting on *QUALITY*," but it seems to be "whatever the rest of the people in your group think it is that you do."

    My way of thinking comes from a scientific point of view.  I cannot "prove there are no bugs," because I can’t prove a negative.  I can, however, prove that under certain circumstances the product does what it is defined to do.  Thinking carefully about what those circumstances are, and forcing the team to come to consensus about what the product is "supposed to do" are useful ways to spend time, and aren’t necessarily done by anyone else.

    I feel like I could probably write a book about my thoughts here… but a couple of questions I feel that I should be able to answer better than anyone else on my team:

    1. As it exists right now, what are the best and worst parts of this product/feature from a user’s point of view?

    2. What will cause this product/feature to fail?

  3. Michael Bolton says:

    >I feel as though there is a quantum leap I can make which will put myself ahead of this curve.

    The quantum leap comes, I think, when we recognize that we can’t know everything, but that we can learn some things, and provide value by learning things that other people haven’t learned yet.  Ironically, I doubt that this learning is itself informed by quantum leaps, but more likely by steady progress–lots of tiny leaps, rather than one big one.

    I further don’t think that it comes from increased automation per se, but rather from increased sapience (http://www.satisfice.com/blog/archives/99).

    —Michael B.

  4. Jerrad: I think it is more than getting my developers testing. My developers do write unit tests. And still I feel this way…

  5. Jim Bullock says:

    This: "My team is building a product. The team is composed of human beings. Human beings are fallible and make mistakes . . . "

    Exactly so. Of course, the biggest leverage is when you can change the practices and environment based on what you discover from testing. Otherwise you’re stuck in a silly codependency. Folks making stuff mess up the same way. Folks finding stuff find the same stuff the same way. Wash, rinse, repeat. Boring. Also dumb.

    *Any* testing not also part of ongoing SPI (software process improvement) is just silly. It does lead to job security, however.

  6. Massif says:

    As my friend said when I told him I was testing a large application at work:

    "Why test? Why not just prove it works?"

    But what else do you expect from a Haskell programmer? He had a point, though: code which can be logically proven correct would save a hell of a lot of overhead on testing.

    Then you’d only be left with proving that you’re meeting requirements.
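
Short of a full proof, property-based checking gets partway there: state a property that must hold for all inputs and let the machine hunt for a counterexample. A minimal sketch, with an invented run-length `encode`/`decode` pair standing in for real product code:

```python
import random

def encode(s):
    """Run-length encode a string into (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def decode(pairs):
    """Invert encode: expand (char, count) pairs back into a string."""
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip_property(trials=1000, seed=0):
    """Property: decode(encode(s)) == s for every string s.
    Search random inputs for a counterexample."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 20)))
        if decode(encode(s)) != s:
            return s  # counterexample found
    return None  # property held on every trial

print(check_roundtrip_property())  # → None
```

This is not a proof, of course: it only samples the input space. But a property covers infinitely many cases a hand-picked example list never would.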

  7. Roger Foden says:

    How about putting some automated testing into the product itself, built in as an intrinsic element of the product’s architecture?

    Try to give the product an idea of when ‘something’ is wrong with the system, even though its components think everything is ok locally. This might apply especially when the ‘exponentially exploding complexity of software’ is due to integration between things that are themselves quite complex.

    Build the product (and its architecture) so it will continue to function in the presence of defects, and not crash at the slightest problem.

  8. Jim Lang says:

    I don’t know that you can get there from here. Not consciously, at least. You spend time boiling your thoughts, ideas, methods down to their most elemental, which gets you down to (about) the atomic level. What you need is a way to go from atomic to subatomic.  Find the protons, neutrons, and electrons that make up those elements.

    I don’t know what they are, or how to get there. I’m just coming to terms with the value of testing in a test-less (not test-free) environment, so I haven’t walked your path. But I see where it leads. Wish I could point you in the right direction. In the meantime, I’ll be glad to follow.

  9. Pete. says:

    If you are a soldier, is it your place to question the business value of taking any particular hill?  Do you believe that you are (or should be) a soldier, and not a general?

    What is the rate of increase in defect costs compared to the rate of increase in software complexity?  Relative to our competition?  Relative to our ideals?  What is the business value of each?

    What if we could reduce the cost of making mistakes?

    What if automation wrote itself (I’m not joking here), that is, if the product knew how it was supposed to work and could find its own defects?

    What would it take to make that vision a reality?

  10. I think the key is that you have to realize that you can’t test properly because you are just one person. In your post you talk about how you feel your role is to actually evaluate whether a product is "good enough to ship"; in other words, your role is to ensure that your product is not slaughtered in the arena of public opinion.

    Instead of focusing on you trying to be some sort of clairvoyant that can divine what the customers like or dislike about your product, why not set up some sort of paid feedback focus group? Your role would then be marshaling that feedback into its proper container.

    I would then take my testing group and break them up into areas of responsibility:

    tester 1: Development bugs

    tester 2: Feature Requests

    tester 3: General feedback

    Bugs that the focus groups find could be sent to tester #1, who would then "own" that bug and work out its priority based on effort to fix and business value. Sort of like a burn-down list in Scrum.

    As for the people in the focus groups, you could offer them features or areas to play with and a micro-site through which they can submit feedback. That way you can direct people to look at a specific area of the site, and you are not holding back development waiting for people in your focus group to look at a feature that was just developed.

    Hmmm… comment is getting kinda long. Maybe I’ll blog about this or something.

  11. Roger: Building testing into the product is an interesting idea. Companies like IBM have been working on self-diagnosing and self-healing hardware and software for decades. Fifty percent or more of the CPU budget on the B-2 bomber is dedicated to testing the system. Doing the same for applications would be an interesting experiment.
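
To make the idea concrete, here is a minimal sketch of what a built-in check might look like; every name and invariant in it is invented for illustration, not taken from any real system. Each component believes it is healthy locally, and only a cross-component invariant reveals the problem:

```python
# Toy sketch of built-in self-diagnosis (all names invented): each
# component looks fine on its own, but a cross-component invariant
# check catches the system-level inconsistency.

class OrderSystem:
    def __init__(self):
        self.orders = {}             # order_id -> unit count
        self.inventory_reserved = 0  # units reserved across all open orders

    def place_order(self, order_id, count):
        self.orders[order_id] = count
        self.inventory_reserved += count

    def cancel_order(self, order_id):
        # Deliberate defect: forgets to release the reservation.
        self.orders.pop(order_id, None)

    def self_test(self):
        """Cross-component invariant: reserved units must equal the sum
        of open orders. Cheap enough to run periodically in production."""
        failures = []
        if self.inventory_reserved != sum(self.orders.values()):
            failures.append("reserved units out of sync with open orders")
        return failures

system = OrderSystem()
system.place_order("a", 3)
system.place_order("b", 2)
system.cancel_order("b")   # silently corrupts the invariant
print(system.self_test())  # → ['reserved units out of sync with open orders']
```

The interesting design question is which invariants are worth the runtime cost, since, as with the B-2, self-checking competes with the product's real work for resources.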

  12. Pete: Reducing the cost of making mistakes seems likely to help. TDD and Agile are aimed at this, I think. Tests writing themselves is an interesting idea. Model-based testing is an early form of this. AsmL [http://research.microsoft.com/fse/asml/] is an attempt to formally define functionality. This would indeed be an interesting avenue to pursue.
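
To sketch model-based testing in miniature (the bounded counter below is a made-up example, not AsmL): a model states how the product is supposed to behave, and a driver walks the model and the implementation through the same random operations, reporting the first divergence.

```python
import random

class BoundedCounterModel:
    """The model: a counter clamped to [0, limit]. This is the 'spec'."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
    def increment(self):
        self.value = min(self.value + 1, self.limit)
    def decrement(self):
        self.value = max(self.value - 1, 0)

class BoundedCounterImpl:
    """The implementation under test (here, deliberately correct)."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
    def increment(self):
        if self.value < self.limit:
            self.value += 1
    def decrement(self):
        if self.value > 0:
            self.value -= 1

def model_based_test(steps=500, seed=1):
    """Drive model and implementation with the same random operations;
    return the step where they diverge, or None if they always agree."""
    rng = random.Random(seed)
    model, impl = BoundedCounterModel(10), BoundedCounterImpl(10)
    for step in range(steps):
        op = rng.choice(["increment", "decrement"])
        getattr(model, op)()
        getattr(impl, op)()
        if model.value != impl.value:
            return step  # divergence: a defect in one of the two
    return None

print(model_based_test())  # → None
```

The test "writes itself" in the sense that once the model exists, new operation sequences cost nothing; the hard work moves into keeping the model faithful to intent.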

  13. Matthew: Using our customers to do our testing is an interesting idea. This is sort of what beta testing is all about, although that feedback generally comes too late to make fundamental changes to the product. I have been on teams which have worked with a select set of customers, giving them early builds of our software and incorporating their suggestions into our product. Their involvement definitely made our product better. And I have never been on a product team that wasn’t happy to incorporate customers’ documents and such into their pool of test data. A similar interesting idea would be to allow customers to submit automated tests into a team’s automation pool.

  14. JIB says:

    I took some time, and these considerations came to mind:

    1) Once I thought: software can and will do only finitely many things (by the digital nature of software and of digital 🙂 computers). So it can be completely tested and made bug-free (finite states and finite paths through the software). So where is the problem? It would just take far too much effort (in every sense) to test all the states and paths.

    2) Also this: the process of finding bugs, automated or not, is itself buggy.

    3) "Exploding complexity" brings even more states and paths. It’s a law of thermodynamics: entropy rises. Which brings us to this: human beings are in no way perfect (they simply can’t be), so why should software be? It’s like the evolution of the human brain: out of buggy, unstable building blocks, something emerged that is stable enough to perform certain tasks in certain circumstances.

    4) The only way is to make buggy software 🙂 Actually, the term "buggy" is inappropriate in such an environment. This sounds very similar (at least by name) to the IBM research mentioned here earlier. Other companies and research teams are working on it too.

    5) HOW can it be achieved? Or at least, how is it supposed to work? Here fantasy and brainstorming kick in. It would be fun to take part in it. BUT some things are clear: it takes a very different approach to requirements. Possibly the end of requirements and specifications as we know them. Like humans: they had constraints, not requirements, and that created mind and soul. Or maybe not.

    That is the only quantum leap possible, I think. All other routes are a brute-force infinite loop:

    – new software must be more complex,

    – developers will have better (buggy) hardware and tools (also buggy); add more developers and testers,

    – which will create even more bugs.

    Maybe another possibility is anthropological: software (and hardware too) will reach a certain level of complexity and will STOP. It will simply do everything human beings want. But that is quite a doomed scenario.

  15. Michele says:

    Hi Michael: I was recently fortunate enough to attend the Rapid Software Testing course, delivered by Michael Bolton. One of the most helpful things (of the many that were given) was delivered in one word: DE-FOCUS. Perhaps you are focusing too hard on the goal you want to achieve. Pull away from it and do something completely off the wall. Make up your mind that you will not look at the whole wheel, but just one of the spokes… maybe the biggest part of the answer to your quantum leap lies in a small, dark corner of the process.

  16. Michele: Aren’t Michael and RST grand? Thanks for reminding me about defocusing. I will ponder on that for a while.

  17. Thayu says:

    Why don’t we try to take some pointers from some other industry? From some industry where people find almost no defects. What helps them achieve that? I am an electronics engineer, and I have seen that almost all major electronics design revolves around testing. If you can’t test it, you don’t design it. As simple as that. Maybe we need a revolutionary change not just here in Microsoft, but in the field of software itself. I am fresh out of college and lack experience in both fields, but I can’t help feeling we are missing something. Every field has to have quality control and assurance, and that is nothing but testing in some form or another… Why don’t we learn from them? Any suggestions?

  18. Thayu: Learning from other industries is a grand idea. This is often a useful way to get past a block you are encountering.  Do you know of an industry which has near-zero defects?

  19. Thayu says:

    Well, there are a couple of issues that have to be looked at first. The first is the way in which software is unique, different from most other fields. The second is that maybe, instead of using that uniqueness to spur us on to perfection, we are using it as slack to take it easy, or to focus on other areas. What is this difference? Simply this:

    The software domain is unique in that it is the only field that doesn’t face the inevitable problem of “wear and tear”. Software doesn’t fail simply because of aging. Data loss may occur in storage, but there is no degradation of software quality as such. Also, we don’t have to worry about damage during production. The CDs we ship our software on may be damaged, but that is not our problem either. We don’t have to worry about replacing damaged software in the sense that electronic goods have to be replaced on damage. We simply have to send another copy of the software. There is no extra cost in that, unless you count the cost of the CDs – a negligible cost.

    I believe it is because we do not have enough of an incentive to strive for perfection. We can release software patches for bugs, and in the rare event of a recall, the losses aren’t too huge. Take the hardware industry on the other hand. If there was a fault in the circuit design, the entire product is useless since they can’t release "patches" to correct it. And as manufacturing is extremely expensive, the losses due to a small flaw could be in the billions. Which means they can’t afford even a small error. And that is the reason why hardware design has evolved around testing, and not the other way around.

    The same applies to almost all other fields. Take the satellite field for example. A small flaw means tens of billions of dollars down the drain. Almost nothing can be salvaged. And they have to make sure that if it crashes, it does so without harming humans or creating political troubles. This has led to testing being extremely important. And you wouldn’t believe the amount of time they spend in calculating the number of standbys to be used. Because if they use too few standbys, the entire operation might be a failure. Use too many, and they’re wasting cash in the form of fuel to launch it up into space.

    Stress testing especially is taken to extremes that the satellite might never face. And then when that is over, they can’t use the same piece since it would have been weakened by the tests. So they manufacture another matching piece. We wouldn’t have to even think of this in software – we can make infinite copies.

    An example to illustrate how rigorous testing is with satellites would be the case of one of NASA’s satellites. After a long time of testing, they finally certified a communication satellite to be good enough to stay in orbit for five years. After more than ten years, it was still functioning perfectly, and that is great, because component aging and the probability of failure increase exponentially with time! I’m not sure if the satellite is still up there, since this data is taken from a slightly old book. And this was for a communication satellite – just machinery. If humans were going to be in there, it would be a lot more rigorous I guess.

    We needn’t go to that extreme, but we can improve our testing, design and coding too. How is the question. Aren’t we doing our best? Aren’t we the best brains in the industry? We are Microsoft, after all… So if the fault doesn’t lie with the people, it is the technique. What should we change? How can we design based on testing, and not test based on design? Or is there something else we should look at? I think the time is fast approaching when we ought to hold brainstorming sessions regularly to address this issue.

    Why aren’t we perfect? Is it just a lack of necessity? As evinced by bug bars and triages, we are allowing bugs to slip in. We only remove the ones that have to necessarily be removed. Maybe if it is a companywide necessity that there are fewer bugs than presently accepted, some of us will find some amazing solutions! 😉

    Also, we needn’t look at an industry that has near-zero defects. As just mentioned above, we can ignore wear and tear problems. We can ignore aging. What do we see when we remove those? We come face to face with the painful possibility that maybe we just don’t have enough incentive to go the distance! There are differences in methodology of course, and we can analyze them, but till now most of us wouldn’t have thought this could be simply because we weren’t pushed far enough, hard enough! Painful thought – and maybe true.

    P.S: Please do correct me if I’m wrong or am saying something stupid anywhere, because as I said, I’ve just finished college and haven’t been in Microsoft for even a month!

  20. Thayu: I agree that one reason most software is rife with defects is that software makers do not have sufficient incentive to do otherwise. Thanks for the thoughts!

  21. thayu says:

    Talking about the hardware industry, let us see how electronic design is different from software design. Let us take Ted, a transistor-level designer, Gary, a gate-level designer, and Cody, a component-level designer. Ted puts transistors together and decides their arrangement on an IC to reduce space consumption, has to worry about the wiring to reduce capacitance and delays, and so many more factors, to design even a basic gate. So Ted takes his time and does whatever he has to do and gets a beautiful AND gate which functions perfectly and is quite small. Gary, the gate-level designer, tries to put together logic gates in the best way possible to achieve basic functional blocks like a shift register or adder and so on. Gary has to decide whether he is going to use just NAND gates throughout to facilitate manufacture, or whether he is going to use a mixture of gates to improve efficiency, and so on. Cody, the component-level designer, will use shift registers, counters, timers, adders and so on to realize some complex circuitry like the control for traffic signals.

    What is the pattern we see here? The efficiency that Ted can achieve in a complex circuit such as the traffic light control is much higher than what Gary can obtain, which is in turn higher than the efficiency Cody can achieve. Why then isn’t the industry infested with Teds? The answer is that Ted will take much longer than Gary, who will in turn take longer than Cody to finish the job. So it is the Codys of the industry who get the work. Where do Ted and Gary go? They go to where efficiency is imperative. Ted would be perfectly at home at Intel, where the processors have to be small and efficient. If that job was given to Cody, we’d be back to the days of computers taking up an entire building. Gary would go to consumer electronics companies which have to produce efficient products, but produce them quickly too. The Codys can be found in consumer electronics companies too, but ones that have to be nimble and worry more about being able to adapt to a rapidly changing market which might not wait for them to come up with efficient products.

    So how do they individually ensure quality? I mean, Cody knows nothing about electromagnetic effects at the transistor level, so he can’t even design a decent gate, let alone complex circuitry. Gary can design using gates, but he can’t design them. What happens? As you probably know, Gary doesn’t have to design gates because they are readily available. And Cody doesn’t have to design adders and so on, because they are readily available as well. And as they know that the gates or adders they buy are perfect, these guys can go ahead and design using them with confidence. As long as their design is right, they know that the end product will be right too. This is a case of abstraction at its best.

    How can we use this in the software field? How can we build on perfect code to write more code that is equally perfect? By employing the use of libraries? That is a good idea, as long as all entries in the libraries are perfect, i.e. bug-free. And we need to have multiple levels of libraries, so that the necessary level of abstraction is available to the coders. But this is more or less what we are already doing! Why then do we still have bugs? What do they do differently in hardware?

    Let us take another look at what Ted, Gary and Cody are doing. Ted takes transistors and defines connections between them. Transistor design keeps changing due to constant research, and Ted doesn’t have to worry about that because it is abstracted away; manufacture will handle the implementation. So Ted just decides which transistors to use and how they are connected. Since each transistor is a well-defined entity, he just handles the selection and, more importantly, the arrangement and wiring. What about Gary? Again, he worries only about the selection of gates and wiring/interconnection. Cody? No difference there either – selection of functional components and wiring again.

    So now, what does wiring represent? The existence of the most basic interface. So in software terms, these guys just pick the functions and the interfaces between them. And what do they test? The perfect functioning of the interfaces between two functions. And coupled with pre-existing data from the components, they can easily say how much current each circuit can take and so on by just taking the weakest component; i.e., if one of the gates will break down when its input voltage exceeds 3V, by just looking at the circuit they can say what the maximum input can be. Thus, the limitations are decided by identifying the weakest component, or the weakest link.

    Why doesn’t something like that exist in software? We can see that we repeatedly use certain functions over and over again. And yes, they have been collected in libraries. Why then isn’t our code just a collection of function calls? Why aren’t we just writing automation to test all interfaces/interactions between the various functions? Any answers? I’m just curious. That there would be too many interfaces to test can’t be the answer, because it would just mean we haven’t abstracted enough. We know how much wiring exists in even the most basic IC. If they can design despite that, I believe we can design software that is just as complex… And no, saying that even a declaration is like a function call and so all our code is just a collection of function calls already isn’t the answer. It would mean we should’ve been able to test them all then. And as I said, saying there would be too much to test just means we haven’t abstracted enough – and that we aren’t building enough on past successes (read "perfect code") to build more.

    I think the reason is probably that we are more interested in coming up with more features and more capabilities than with perfecting what we already have. When Java was first introduced, its amazing collection of built-in functions probably spurred people to explore more features rather than to write more perfect code with just slightly improved functionality. And that must have been only because the market responded to the features rather than to perfection. So maybe perfection isn’t necessary as of now, and maybe it never will be. Looking at it this way makes me feel that when the time comes, the quantum leap might be simply a drastic decrease in the amount of "cool" new features being added, with the time saved being spent improving our libraries or our languages, leading to better code. Maybe, and maybe not.

    I am not wholly satisfied with this answer and think there must be something more. And I also think how ever many theories we come up with, we can never know for sure until we implement some of them and see what happens; but we need some theories first, and that was just one that I could think of. Let us look for more!

  22. jib says:

    Once I was also like Thayu 🙂

    Nothing wrong with it, and some consideration points:

    Spaceships (and their derivatives 🙂) are buggy too :). My favorite story: http://en.wikipedia.org/wiki/Ariane_5_Flight_501

    I still believe that future software should ALLOW bugs, but MUST still be functional. It’s the nature of nature (sorry 🙂): nothing is perfect, but still everything works and looks VERY GOOD. Another favorite on this topic, a VERY FUNDAMENTAL LAW OF NATURE which has changed a lot in our lives: http://en.wikipedia.org/wiki/Quantum_indeterminacy

    To make a long story short: you can’t simultaneously and precisely know both the speed (energy) and the position of a quantum particle. Only one of those parameters can be precise, not both.

    In contrast to Thayu, I want to say: we (and our software) shouldn’t be perfect. Nobody is. Don’t get me wrong: our software should be perfect at its task, including UI, reliability, gotchas, etc. But internally it may be buggy (let’s call it "imperfect"). All industries producing physical items have defects, especially electronics. Remember "yield rate": all chips are tested and rated: bad, 2 GHz, 2.2 GHz, 2.3 GHz. The difference is that chips are very reliable indeed once they have passed. There is no industry with zero defects. A product can be labeled "near zero defects" only if it has quite limited on/off-type functionality, ALL of which can be verified (like the computer chips just mentioned); every other product has "defect tolerance" and still provides value.

    Also I like this: http://en.wikipedia.org/wiki/Analog_computer#Timeline_of_analog_computers

    It may feed the "learning from other industries is a grand idea" point too, when investigated deeply enough 🙂

    It’s amazing how unstable things create very stable systems.

  23. Pete. says:

    Michael: Thanks for the link to AsmL.  I’ve put it on my reading list.

    Thayu: "The software domain is unique in that it is the only field that doesn’t face the inevitable problem of “wear and tear”. "

    I respectfully disagree.  Software quality naturally degrades over time in a number of ways:

    1) Software often runs on newly created hardware that it cannot possibly have been tested with.

    2) Software often depends on other software and/or services to operate, and those often change over time.

    3) Customers often find new ways to use or abuse (e.g. hackers) software over time.

    4) Protocols, rules, laws, needs, and expectations change over time.

    5) Basically, the "things" that define the quality of software are in constant flux.

    We would do better to think of software as a living entity that needs to be serviced regularly in order to maintain its integrity and quality.

    Software is a service, and our thoughts around quality should reflect that.


  24. thayu says:

    Michael: Thanks for the thoughts, Michael. Yeah, I do understand perfection is not possible; and there have been bigger failures in the satellite domain, which have been hushed up, than Ariane 5 Flight 501. But we don’t hear of them too often, do we? You saw the flak they received! That was enough for them to get the subsequent models working just fine… 😉 And yes, I realize that every field has bugs – what I meant was, they seem to push "bugs" or whatever of that nature under the convenient rug of "specifications". As you said, they do rate the ICs based on their limits. I understand perfection in the absolute sense is impossible, but hey, isn’t everything relative? And I do know that some hardware we use is extremely buggy. My final-year hardware project, which seemed to have bugs everywhere we looked, is being used in the defence industry now! How? We just pushed them from the column of limitations to the column of specifications! 😉 I say we should still strive for perfection, so that even when we do fall short (as we definitely will) we are at least way ahead of the pack! And yeah, the point is not having "near-zero defects", it is about having "near-zero complaints about defects"! Sorry if I seemed to have gotten carried away. Thanks, Michael, for bringing me back to earth! 😉

    Pete: Thanks Pete, what you say are issues for us to contend with that I hadn’t given much thought to. But then again, they are issues only because of the durability of software. It is only because our software stays around for ages that it is subjected to these stresses due to change. So I prefer to look at this as our software being put in a totally new environment and being expected to survive or function properly – kinda like throwing an eagle in the water and expecting it to swim like a dolphin! The eagle remains the same – it is the water that kills it. There is nothing wrong with our software as such – it is just that it is not designed for all environments. And it shouldn’t be! If it functions perfectly in the environments it is intended for, that should be sufficient.

    Also, after a little thought, I realize that we face the almost unique problem of hackers. Almost anyone with access to the internet has access to data on how to hack/crack, whereas such technical know-how isn’t exactly common in other fields like hardware. And to summarize, it is perfectly alright to have bugs as long as they aren’t found – or, I quote, "Do what you want son, so long as no one finds out. Problem is, someone usually finds out. So ensure it can be found out only when it can make no difference to you"… In short, no one cares much about bugs in Windows 95 now, do they? Its lifetime is almost over, and all the changes Pete mentioned might not make much difference to it simply because no one uses it! 😉 So we’ll just have to put in bugs that nobody can find for ten or twenty years. Kinda like encryption – anything can be broken given time, but hey, what’s the point of breaking a message (or software) when its lifetime is over? Maybe that is what we have to focus on?
