Years ago I had an employee, let's call him Vanya (not his real name). He was struggling a bit, so I was watching his work closely. Every week we discussed what he needed to get done the next week and what he had done the previous week. I kept a list of the work items he needed to complete and checked them off as he finished each one. One work item involved testing a DirectShow filter; the item on the list was to write tests for it. One week he worked on and completed this item. A few months later we became aware of an issue that fundamentally prevented the filter from working. In fact, it had probably never worked. Why, I wondered, didn't Vanya's tests catch it? I went to speak with him. It turns out, he had written the tests. They had compiled. He had not, however, ever actually run them. They were "done" in his mind, but not in mine. Oh, and what he had written didn't actually work. Shocking, I know.
I tell you this story to introduce a problem I've run into many times on many different scales. This story is probably the most egregious example, but it is certainly not isolated. The problem stems from the fact that we rarely define the word "done." It is assumed that everyone shares a definition, but this is rarely true. Is a feature done when it compiles? When it is checked in? When it can run successfully? When it shows up in a particular build? All of these are possible interpretations of the same phrase.
It is important to have a shared idea of where the finish line is. Without that, some will claim victory and others defeat even when talking about the same events. It is not enough to have a shared vision of the product; it is also necessary to agree on the specifics of completion. To establish a shared definition of done, it is necessary to talk about it. Flush the latent assumptions out into the open. Before starting on a project, it is imperative to have a conversation about what it means to be done. Define in strict terms what completion looks like so that everyone will have a shared vision.
For large projects, this shared vision of done can take the form of exit criteria. "We will fix all priority 1 and 2 bugs, survive this many hours of stress, etc." For small projects or individual features in a large project, less extensive criteria are needed, but it is still important to agree on what state things will be in by what dates.
While not strictly necessary, it is also wise to define objective tests for done-ness. For instance, when working on new features, I define "done" as working in the primary scenarios. Bugs in corner cases are acceptable, but if a feature can't be exercised in the main way it was intended to be used, it can't be tested and isn't complete. To ensure that these criteria are met, I often insist on seeing the feature demonstrated. This is a bright line. Either the feature can be seen working, or it cannot. If it can't, it isn't done and more work is needed before moving on to the next feature.
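To make this concrete, here is a minimal sketch of what a primary-scenario test might look like, in Python for brevity. The `VideoFilter` class and its `process` method are hypothetical stand-ins for whatever the feature actually exposes; the point is that the test exercises the main path and is actually run, not merely compiled.

```python
import unittest

# Hypothetical stand-in for the feature under test. In the story above,
# this would be the DirectShow filter's primary processing path.
class VideoFilter:
    def process(self, frame: bytes) -> bytes:
        # Placeholder behavior: invert each byte of the frame.
        return bytes(255 - b for b in frame)

class PrimaryScenarioTest(unittest.TestCase):
    """The 'done' check: the feature must work in its main intended use."""

    def test_primary_scenario(self):
        f = VideoFilter()
        frame = bytes(range(16))
        out = f.process(frame)
        # The bright line: the main path runs and produces sane output.
        self.assertEqual(len(out), len(frame))
        self.assertEqual(out[0], 255)

if __name__ == "__main__":
    # Crucially, the suite is executed, not just written and checked in.
    unittest.main()
```

A test like this doubles as the demonstration: if it passes when run, the primary scenario can be seen working; if it was never run, nothing has been shown at all.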