Building trust is precisely what I look for when I think about adaptive development in general. We, as creators and as consumers of software, need greater trust at many levels. The directors of the European Space Agency surely wanted a better level of trust in their software after the Ariane 5 explosion (caused by a software design error). I want a better level of trust in the WCF service I am designing and programming right now, which will be deployed at enterprise scale and called by dozens of enterprise applications.
How do I know I am not fooling myself, and my project team, into believing that my effort estimate is right?
How do I know I am not fooling myself, and my project team, into believing that my WCF service is ready for deployment?
For better levels of trust, the first principle is, as Richard P. Feynman said in regard to a trait of scientific thought: do not fool yourself, nor your team, nor your customer. The methods of science have built-in ways of not fooling ourselves, drawn from all those useful combinations of different schools of thought known as rationalism, empiricism, and skepticism.
Right now, for example, an increasing number of automated tests passing at each build of my set of WCF services amounts to an increasing level of trust. But those tests are not everything. There are a number of feedback loops from a growing set of client applications and, ultimately, from end users, which also help us avoid fooling ourselves into believing that “we have the belief that we are on the right track” or “we want to believe…”. Whether we have that belief is not important; what is relevant is what we can prove, based on diverse, wide-ranging, and non-trivial kinds of evidence.
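The pass rate of the test suite across successive builds is one way to make that evidence signal concrete. A minimal sketch, in which all names and the example build history are illustrative assumptions rather than any real project's data:

```python
# Hypothetical sketch: treating the test pass rate of successive builds
# as one simple, quantifiable evidence signal.

def pass_rate(passed: int, total: int) -> float:
    """Fraction of tests passing in one build; 0.0 when no tests ran."""
    return passed / total if total else 0.0

def trend(history: list[tuple[int, int]]) -> list[float]:
    """Pass rate per build, oldest first: (passed, total) pairs in."""
    return [pass_rate(p, t) for p, t in history]

# Example: three successive builds of a service's test suite.
builds = [(80, 100), (95, 100), (100, 100)]
print(trend(builds))  # [0.8, 0.95, 1.0]
```

A rising trend is evidence; a flat or falling one is a warning, regardless of how confident the team feels.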
Should I trust my teenage daughter when she says she is going to do her chores? Is there any evidence or pattern of behavior to support that? Then I have my answer. The words themselves are not evidence. It is not a matter of trust; it is ultimately a matter of evidence.
The historical record of delivering good software on time and on budget, by a particular project team, could be good evidence on which to base my trust. Plans and good intentions are not evidence in themselves.
So, a measure of trust in X is going to be based on measures of actual evidence for X, for any relevant value of X.
If there are only words, plans, and good intentions, without a historical record backing them up, then there is little trust to measure. In that case, a plausible course of action is to start building that historical record for particular project teams.
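That rule of thumb can be sketched as a weighted combination of evidence signals. The signal names, weights, and sample values below are all illustrative assumptions, not an established metric:

```python
# Hypothetical sketch: trust in a team as a weighted average of
# independent evidence signals, each normalized to [0.0, 1.0].

def trust_score(evidence: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the evidence signals named in `weights`."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0  # no evidence categories at all: nothing to measure
    return sum(evidence.get(k, 0.0) * w for k, w in weights.items()) / total_weight

# A team with no delivery history scores low, whatever its intentions.
new_team = {"on_time_deliveries": 0.0, "tests_passing": 0.0, "client_feedback": 0.0}
seasoned = {"on_time_deliveries": 0.9, "tests_passing": 1.0, "client_feedback": 0.8}
w = {"on_time_deliveries": 0.5, "tests_passing": 0.3, "client_feedback": 0.2}

print(trust_score(new_team, w))           # 0.0
print(round(trust_score(seasoned, w), 2))  # 0.91
```

The point of the sketch is only that the score is driven entirely by recorded evidence: with an empty history, no choice of words, plans, or intentions can raise it.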