Do you use a debugger when you develop?

I posted an earlier blog entry about wanting support for managed coroutines in the next version of VS. Interestingly enough, that didn't garner much interest. Instead, what seemed to pique people's interest was the fact that I really don't use a debugger when developing managed code. It wasn't a boast, just something I've observed. The single highest contributing factor is probably that I develop in a completely different way in managed versus unmanaged code. Because unmanaged code (C++) is so high maintenance, I tend to code in a way that allows me to implement the logic I want as quickly as possible without unnecessary overhead. High maintenance in this sense refers not to conceptual complexity, but purely to coding complexity: the need to maintain header/implementation files, plus the enormous amount of error handling and memory tracking I have to do in unmanaged code. That leads me to write large methods with a lot of logic in them, which I then end up needing to debug in order to understand.

Refactoring that code into something simpler and more maintainable is extremely hard, because you can't just extract methods (or apply any number of other refactorings); in general you end up doing a lot of extra work yourself to handle the nasty details that the runtime now does for you automatically.

With managed code I tend to write smaller, simpler objects with simple messages that are easy to understand and verify. Tracking a bug down is usually quite simple because you'll either have an exception, or you can pretty easily figure out where the error occurred. Fixing the bug is just a matter of asking "OK, who could have called this with the wrong values?", sitting and mulling it over, and then, in a gestalt flash, realizing the issue, fixing it, rerunning, and having everything work fine.

I've also noticed this when coding in a functional language, except that there the difference is much more marked. Generally, if the code compiles then it's correct. And if it's not, it's very, very obviously not; i.e. on the first input to the system it will be clear the whole thing is borked.

Maybe I do actually use a debugger on my code, except that the debugger is more a function of a good compiler/runtime/unit tests than an actual tool that I fire up. What does the debugger get me? A call stack... which is probably not useful, because my methods shouldn't really care about who called them. Values of variables... which is usually not too useful, because I should already understand what the values could be through the use of immutable objects that validate at construction time and methods that validate upon entry.
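The "validate at construction" style described above can be sketched roughly like this. It's a minimal, hypothetical example (a made-up `Temperature` class, in Java rather than C#, since the principle applies to any managed language): once the constructor has enforced the invariant, no caller can ever hand you an instance in a bad state, so there's nothing left for a debugger to reveal.

```java
// Sketch of the style described above: an immutable object that validates
// its invariant once, at construction time. A bad value fails fast with a
// clear exception instead of propagating silently until a debugger is needed.
public final class Temperature {
    private final double celsius;

    public Temperature(double celsius) {
        // Validate on entry; an invalid argument can never become state.
        if (celsius < -273.15) {
            throw new IllegalArgumentException(
                "temperature below absolute zero: " + celsius);
        }
        this.celsius = celsius;
    }

    public double celsius() {
        return celsius;
    }

    // Operations return new instances instead of mutating, so every
    // Temperature that exists has passed the constructor's check.
    public Temperature plus(double delta) {
        return new Temperature(celsius + delta);
    }

    public static void main(String[] args) {
        Temperature t = new Temperature(20.0).plus(5.0);
        System.out.println(t.celsius()); // prints 25.0
        new Temperature(-300.0);         // throws IllegalArgumentException
    }
}
```

The point is that the constructor, not the call stack, answers the question "who could have called this with the wrong values?": whoever passed the bad value gets the exception at the exact point of the mistake.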

Is it just me, or are there others out there who have found that they use a debugger less and less (or not at all)?

Comments (19)

  1. Ilya says:

Presumably there are plenty of others. I do not use a debugger myself when developing – a compiler and a printf do just fine.

  2. Mark Allan says:

Personally, I find that the best way to find bugs is to read through the relevant code first, which in 90% of cases will cause a cry of ‘Doh!’ followed by a small burst of frantic typing. If that fails, a bit of tracing usually does the trick. After that, I crank up the debugger.

    On the other hand, I would say that for most people I know the first instinct is still to jump in and single-step.

    The difference seems to be less to do with managed/unmanaged code and more to do with experience – after you’ve been coding for a while (particularly with multi-threaded apps), it just seems easier and quicker to do it in your head.

  3. David James says:

    Since I’ve been doing test-driven development, i.e. writing comprehensive unit tests, I have rarely needed to use a debugger against my own code.

    If another developer in our team asks me to help them with code that I’m not familiar with, I find a debugger useful.

    We set a breakpoint, discuss the line of code, single-step, discuss the next line, etc. It forces my colleague to express in words their understanding of the code, it forces me to understand it before I step to the next line, and after stepping into half a dozen methods, I usually get a good feeling for the code.

    We have about 2 million lines of code, with 50 developers changing it every day, so there is always new and unfamiliar code in the system. For learning unfamiliar code quickly, the debugger is useful. For fixing bugs in code I know well, it’s not so useful.

  4. I always single-step through all new code I write. It gets me most of the benefits of a _code_ inspection with a minimal time investment.

    This is a well-known technique. It slows you down enough to ensure that you see what you have written, instead of what you think you have written.

    Similar techniques have been used by people in other fields. People proofreading texts, for example, read each word in reverse, two letters at a time, thus avoiding the brain’s tendency to auto-correct misspelled words.

  5. AndrewSeven says:

    It has been my consistent impression that I use the debugger much less than those around me, but that when I do use it I am much more proficient in its use.

    When I am doing TDD, I almost completely stop using the debugger.

  6. Jeff Perrin says:

    I actually almost completely forgot about the VS debugger for a few months. Most of my errors seemed to be fairly simple to fix… As soon as the exception happened I’d have a really good idea of what was going on.

    Recently, I was working on an aspx page that had some fairly finicky logic that I was having a hard time nailing down. My co-worker fired up the debugger and started stepping through the code… I was like, *damn*, I can’t believe I forgot about the *debugger*! We had the issues fixed in no time.

  7. I’m in the same camp with the TDD guys. I use the debugger much less often these days because of Test first development and liberal use of log4net since my test failures typically point out where any bugs are.

    I also occasionally fire up NCover to show me where I don’t have test coverage to give me a sense of where I should spend more time manually stepping through the code.

  8. Dave Vespa says:

    Developing using the debugger

  9. Radu Grigore says:

    I use the debugger only in exceptional situations: like when I get a segfault 🙂 I use the debugger to find out where it happened and then I close it. Assertions/exceptions are a much better way to ensure correctness, since they also act as code documentation. Logging helps too. And testing is a must: "If you didn’t test it then it doesn’t work" (Bjarne Stroustrup, I think).

    Now, one question: what does functional / non-functional have to do with how many bugs are caught by the compiler? The type system is probably the most important factor here.

  10. I can’t believe I’m in the minority who think that you are just talking crazy talk there, man! 🙂

    Yes, I use unit testing. Yes, I write object oriented code. Yes, I use stack trace for exceptions (even though I am one of the few I know who do).

    I still need the debugger, though.

    If I get some unexpected exception, I want to know what caused it in the code and try to re-create the scenario before I write a unit test for it. Maybe it’s not something that was caused by my code, but by someone else’s? Using the debugger is much easier.

    I usually use the debugger to see what’s in an object at a certain point in time to make sure it meets my expectations.

    I’ve been writing some extensibility code in the past few days and the fact of the matter is that there’s not enough documentation out for it, so I go: Hmm, what do I get if I try to get this property? Is this the right method for this? etc.

    This is yet another thing I use the debugger for.

  11. c00ldude says:

    In C# and ASP.NET, when it fails out of the compiler, the stack trace is usually enough to pinpoint the problem and refactor. Uncaught runtime exceptions can be traced from cordbg, which seems to be invoked automatically by the CLR. Runtime anomalies (features, not bugs!) which don’t throw exceptions show up during testing and require serious tinkering. The .NET Reflector is a valuable tool for this.

    I dunno; it seems that as programming technique evolves and the development environment gets more sophisticated, someone will invent new terminology to describe what we used to know as #if dbg assert…

  12. Radu: Good point. I think I meant "a strongly typed functional language with static type checking". As in, at compile time all type problems are identified.

  13. James says:

    On the contrary, I find myself in the debugger more often in C# than I did in C++, but I suspect it’s a function of experience.

    In native C++, I write a whole lot of code, run it through a whole lot of tests that I’ve also written, and fix it if necessary. I rarely use the debugger until something comes back from QA (fortunately not that often, but inversely proportional to the number of unit tests I have). I know the language and libraries so well that they rarely surprise me.

    In C#, I write small amounts of code, run it because it’s quick to do so, then I run it again under the debugger to find out where the exception was thrown. I curse, read another chunk of the documentation, fix it, and continue. The library is vast, intimidating and often surprises me (the language is … not).

    The main problem is that the documentation is too busy giving a wealth of examples, but not being concise about anything. It’s hard to pick up preconditions for calling a function or the invariants of the class, and even harder to select which overload to use from the plethora of alternatives. Basically, the framework became too rich in an attempt to make it easy to use and the documentation is verbose but not coherent or precise. The result is that I feel forced into experimental programming, backed up by the debugger.

    I feel dirty.

  14. James: Could you provide an example along with this. I’d like to send this feedback along. Thanks!
