Speaking in ANUG


Aarhus .NET User Group has been so kind as to invite me to come and give a session on June 25th, 2008, and I’ve elected to speak about TDD and Installers, a subject that regular readers of this blog would correctly surmise is dear to my heart.

Read more about the event here (in Danish, as will be the session itself).

If you are in Århus that day, I hope you will consider coming by and saying hello as well.

Comments (10)

  1. Hi Mark

    I’ve been to almost all of ANUG’s meetings, but without question this is the one I’ve been looking forward to the most.

    I’m not a TDD believer. I have tried for many years, but I’m not at all convinced that TDD, or even writing unit tests, is the right way to develop applications.

    I know you disagree, and that is why this might be the best ANUG session ever.

    I do write tests, but it’s the integration tests that help me the most, and I almost never use mock objects. I’ve used mock objects when integrating with external applications/services (e.g. when integrating with a payment provider like the Danish DIBS), but to be honest, even there I would prefer real integration tests, if the external system can handle it.

    I would never mock out the database. Instead, I design my database tests so that they are fast and reliable. For example, I create an empty database (CREATE DATABASE) every time I run a test suite. For every test, I truncate all tables and insert enough rows in each table to have a good test setup (see the sketch below).

    I know you don’t do this because it is slow. But a quick measurement shows me that the first test takes about 1-2 seconds to create the database and schema. After that it takes > 0.2 seconds per test to truncate the tables.

    So if it is not slow, why would I want to write so much code just to avoid this?
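
    To make that concrete, the setup looks roughly like the following simplified sketch. It assumes NUnit and SQL Server; the database name, connection strings and the Orders table are placeholders for illustration only.

    ```csharp
    // Sketch of the create-database-per-suite / truncate-per-test approach.
    // Names and connection strings are placeholders, not from a real project.
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class OrderDataAccessTests
    {
        private const string Master =
            "Server=.;Database=master;Integrated Security=true";
        private const string TestDb =
            "Server=.;Database=IntegrationTestDb;Integrated Security=true";

        [OneTimeSetUp]
        public void CreateDatabaseAndSchema()
        {
            // Runs once per test suite: recreate an empty database and its schema.
            Execute(Master, "IF DB_ID('IntegrationTestDb') IS NOT NULL DROP DATABASE IntegrationTestDb");
            Execute(Master, "CREATE DATABASE IntegrationTestDb");
            Execute(TestDb, "CREATE TABLE Orders (Id INT PRIMARY KEY, Total DECIMAL(18, 2))");
        }

        [SetUp]
        public void ResetTables()
        {
            // Runs before every test: wipe the tables and insert a known baseline.
            Execute(TestDb, "TRUNCATE TABLE Orders");
            Execute(TestDb, "INSERT INTO Orders (Id, Total) VALUES (1, 100.00)");
        }

        [Test]
        public void ReadsTheRowItJustWrote()
        {
            Execute(TestDb, "INSERT INTO Orders (Id, Total) VALUES (2, 250.00)");
            // ...exercise the data access component under test and assert on the result...
        }

        private static void Execute(string connectionString, string sql)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }
    ```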

    By writing integration tests, I know that my system works all the way down to the database, and I know that my data is read from and written to the database correctly.

    I’m not a database guy (if you ask me), but my experience tells me that whenever I find an error in my application, it is either at the integration points or in the UI, and almost never in the Domain.

    We all know the UI is hard to test, but the database is very easy to test. So why is it that you want to mock it?

    When you mock your database and other integration points, you have to introduce a lot of complexity into your code. Often I see people starting to "program to an interface", use virtual methods, etc.

    This leads to a lot more code that you have to write and maintain, and it makes things more difficult for new team members. But worst of all, it makes it hard to "change" your application. I’m not talking about refactoring; I’m talking about "changing" your application.

    If you have mock objects, interfaces, etc., there is so much more code to change, and it will slow you down. It also requires so much discipline to change both the System Under Test and the tests themselves that it gets hard to "embrace change".

    My experience is also that very few people know how to write good unit tests. The tests I’ve seen are often very hard to understand, and it is very hard for newbies to learn how to write good tests.

    I’m not sure where I’m going with all this, except that I would love to hear from a "true believer" why TDD, Mock objects etc. are great.

    Personally I think that TDD will never become mainstream. So I hope that you will be honest and not just expect everyone to agree that tests are good.

    : Thomas

  2. ploeh says:

    Hi Thomas

    Thank you for your comment – I think you managed to beat the word count of my post by several hundred percent 🙂

    First of all, let me state that I believe a lot of what you write is true. There are other parts in which I disagree, but that hardly comes as a surprise.

    I hope you are not going to be disappointed by the session, as I don’t intend it to be a general session on TDD, nor a session that attempts to convert anybody. Still, it’s my ambition that it will provide food for thought.

    In general, I prefer to separate application layers for a number of reasons, as I describe here: http://blogs.msdn.com/ploeh/archive/2007/05/30/ReasonsForIsolation.aspx

    I never create an abstraction and set up Test Doubles just for the fun of it, or because TDD dictates that I do so (which it doesn’t).

    Why would I ever want to abstract away the database layer? As you can read in the link above, there are several reasons, but from a TDD perspective, there’s the additional issue of speed.

    As you can read here (http://blogs.msdn.com/ploeh/archive/2008/01/31/DataAccessComponentTestingRedux.aspx) I use much the same approach as you do when testing my data access components, but I still prefer limiting the number of tests that hit the database as much as possible.

    200 ms per test may sound fast, but it’s way too slow for my tests. Five of these tests, and the test suite takes a second to execute; fifty, and it takes 10 seconds.

    In a real software development project, you can easily have several thousand unit tests. If they all take 200 ms, it can easily take up to 10 minutes to execute a complete test suite.

    In TDD, I follow the Red/Green/Refactor cycle, which means that I typically run my test suite several times per minute. That’s not possible if it takes 10 minutes to execute the suite.

    I know I can’t convince you that TDD is a superior development methodology just with this post, but given the above, it’s no wonder you’re not convinced that TDD works; given the circumstances (10 minute test runs) it wouldn’t work for me either 😉

    This is one of the many reasons I prefer using test doubles whenever I can: it speeds up the majority of the tests by orders of magnitude.

    Even if you don’t believe in TDD, I would still recommend the book ‘xUnit Test Patterns’ by Gerard Meszaros – it covers a lot of these subjects, including the benefits and disadvantages of unit tests, integration tests, subcutaneous tests, etc.

    Looking forward to meeting you in Århus 🙂

  3. Hi Mark

    Thanks for your answer. I’m still looking forward to your session.

    Just a small comment on the speed of integration tests. My point with the > 0.2 seconds was that they are fast. The first test takes a second or two extra, but the rest are almost as fast as normal unit tests; I can easily run tests 2-100 in a single second. I was not clear… my bad.

    With thousands of unit tests in a project, compile time is the bottleneck. If you run your tests several times per minute on a project which has several thousand tests, don’t you spend all your time waiting for the compiler instead?

    : Thomas

    PS: I would love to hear your comments on my postulate that many unit tests make it hard to "change" your application. Not refactoring, but "changing" based on new requirements, where you need to change the tests as well.

  4. ploeh says:

    Hi Thomas

    Thank you for your comment.

    Being able to run your entire (relevant) test suite in a couple of seconds is what counts. If you can do this against a real database, that’s one less reason to be able to replace the data access layer with something else (e.g. a Test Double). There are still other good reasons to do so, but that’s beside the point.

    However, any out-of-process communication (such as a database call) is normally orders of magnitude slower than an in-process call, so if test speed becomes a problem, replacing such calls with in-process Test Doubles is a very effective and common way to increase test speed.
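
    To illustrate the idea (the names below are invented for this example and are not from any of the posts linked above), the consumer is programmed against an abstraction, and the test supplies an in-process Test Double instead of a real database:

    ```csharp
    // A minimal sketch: a consumer programmed against an abstraction, plus an
    // in-memory Test Double that stands in for the database-backed implementation.
    using System.Collections.Generic;
    using NUnit.Framework;

    public interface IOrderRepository
    {
        decimal GetTotal(int orderId);
    }

    public class OrderService
    {
        private readonly IOrderRepository repository;

        public OrderService(IOrderRepository repository)
        {
            this.repository = repository;
        }

        public decimal GetTotalWithTax(int orderId)
        {
            // Adds 25% tax purely for illustration. The production implementation
            // of IOrderRepository would hit the database; this class neither
            // knows nor cares.
            return this.repository.GetTotal(orderId) * 1.25m;
        }
    }

    // In-process Test Double: no out-of-process call, so tests run in microseconds.
    public class FakeOrderRepository : IOrderRepository
    {
        private readonly Dictionary<int, decimal> totals =
            new Dictionary<int, decimal>();

        public void Add(int orderId, decimal total)
        {
            this.totals[orderId] = total;
        }

        public decimal GetTotal(int orderId)
        {
            return this.totals[orderId];
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void GetTotalWithTaxAddsTax()
        {
            var repository = new FakeOrderRepository();
            repository.Add(1, 100m);
            var sut = new OrderService(repository);

            Assert.AreEqual(125m, sut.GetTotalWithTax(1));
        }
    }
    ```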

    I never had problems with compilation time. Could it be that you always compile with Code Analysis switched on? That’s going to slow you down a lot.

    If written poorly, unit tests can make changing the application harder, while the whole purpose was to make it easier. The anti-patterns Overspecified Test and Fragile Test describe this situation in more detail, as well as how to avoid it.

  5. Hi Mark.

    This was a superb presentation; it was one of the best ANUG meetings ever. Could I request a copy of the demo, if possible?

  6. ploeh says:

    Hi Brian

    Thank you, that’s good to hear!

    You are welcome to a copy of the demo code, but to enable me to send it to you, I will need your email address. If you use the Email link near the top left of this blog page, I’ll get an email that I can reply to.

    Not that I mind receiving a gushing comment on my blog as well 🙂

  7. Hi Mark,

    I’d like to thank you for taking the time to visit our humble user group and give a great talk. I thoroughly enjoyed the content and the fluent manner in which you delivered it.

    Hope to see you at another meeting sometime 🙂

    /Søren

  8. ploeh says:

    Hi Søren

    Thank you for giving me the opportunity to give this talk to a great audience.

    I’m very impressed at how you have managed to get ANUG up and running on a purely voluntary basis.

    I look forward to coming back another time.

  9. Hi Mark

    I didn’t see your answer until now.

    Regarding compile time, we try to keep the number of projects in the solution to a minimum, but it is hard 😉

    Regarding Code Analysis… yes, it is always turned on. It takes a few extra seconds when compiling, but I really like Code Analysis. I’m afraid that we will forget to use it if it is not turned on.

    PS: I really enjoyed your session very much. This was the best-prepared talk I’ve seen in a very long time, and I was very inspired. You would be more than welcome to come back 😉

  10. ploeh says:

    Hi Thomas

    Thank you for your comments – I’m happy that you liked the session 🙂

    Keeping the number of projects in the entire application down becomes a moot point if you can achieve proper isolation between layers, as you will then always be able to create new solutions that only work on a subset of the entire code base. That’s one of the many reasons why achieving isolation between components is so desirable: http://blogs.msdn.com/ploeh/archive/2007/05/30/ReasonsForIsolation.aspx.

    TDD helps a lot in ensuring proper isolation, which is yet another good argument for TDD 😉

    Regarding Code Analysis, I like it very much too, but in my experience, it takes a lot of time to run on complex code. While I haven’t specifically measured the difference, my gut feeling is that it makes compilation about 10 times as slow as when it’s not turned on.

    Here’s what we are currently doing in the Mobile Server Team:

    Code Analysis is disabled in the Debug configuration. That allows us to write code/compile/run tests fast.

    In the Release configuration, Code Analysis is enabled and all warnings are treated as compiler errors.

    Each time our automated build runs, it compiles the code in the Release configuration. If there are ANY Code Analysis warnings, the build fails.
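
    In .csproj terms, that corresponds roughly to something like the following sketch (RunCodeAnalysis and CodeAnalysisTreatWarningsAsErrors are the standard Visual Studio Code Analysis properties, but your project files may differ):

    ```xml
    <!-- Debug: fast inner loop, no Code Analysis while coding and running tests. -->
    <PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
      <RunCodeAnalysis>false</RunCodeAnalysis>
    </PropertyGroup>
    <!-- Release: the automated build compiles this configuration, so any
         Code Analysis warning becomes an error and breaks the build. -->
    <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
      <RunCodeAnalysis>true</RunCodeAnalysis>
      <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
    </PropertyGroup>
    ```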

    Since no-one wants to break the build, everyone always remembers to do a Release build before checking in.

    This is working pretty well for us. Obviously, since ANY Code Analysis warning would result in a broken build, we need to constantly review that we don’t have too many suppressions, but that’s another story 🙂