My interest in Bayesian inference comes from my dissatisfaction with 'classical' statistics. Whenever I want to know something, for example the probability that an unknown parameter is between two values, 'classical' statistics seems to answer a different and more convoluted question.
Try asking someone what "the 95% confidence interval for X is (x1, x2)" means. Very likely they will tell you that it means there is a 95% probability that X lies between x1 and x2. That is not the case in classical statistics. It is the case in Bayesian statistics. Also, all the funny business of defining a null hypothesis for the sole purpose of disproving it always made my head spin. You don't need any of that in Bayesian statistics. More recently, my discovery that statistical significance is a harmful concept, instead of the bedrock of knowledge I had always thought it to be, shook my confidence in 'classical' statistics even more.
Admittedly, I'm not that smart. If I have a hard time getting an intuitive understanding of something, it tends to slip from my mind a couple of days after I've learned it. This happens all the time with 'classical' statistics. I feel like I have learned the same thing ten times, because I continuously forget it. This doesn't happen with Bayesian statistics. It just makes intuitive sense.
At this point you might be wondering what 'classical' statistics is. I use the term classical, but I really shouldn't. Classical statistics is normally just called 'statistics', and it is all you learn if you pick up almost any book on the topic (for example the otherwise excellent "Introduction to the Practice of Statistics"). Bayesian statistics is just a footnote in such books. This is a shame.
Bayesian statistics provides a much clearer and more elegant framework for understanding the process of inferring knowledge from data. The underlying question that it answers is: "If I hold an opinion about something and I receive additional data on it, how should I rationally change my opinion?". This question of how to update your knowledge is at the very foundation of human learning and progress in general (for example, the scientific method is based on it). We had better be sure that the way we answer it is sound.
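The update rule behind that question is Bayes' theorem. As a quick sketch (the notation H for a hypothesis and D for the observed data is mine, not from any particular text):

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
```

Here P(H) is the prior (your opinion before seeing the data), P(D | H) is the likelihood (how probable the data is if your hypothesis were true), and P(H | D) is the posterior (your rationally updated opinion). The whole inference process is just repeated application of this rule as new data arrives.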
You might wonder how it is possible to go against something so widely accepted and universally taught as 'classical' statistics. Well, many things that most people believe are wrong. I always like to cite old Ben on this: "The fact that other people agree or disagree with you makes you neither right nor wrong. You will be right if your facts and your reasoning are correct.". This little rule has always served me well.
In this series of posts I will give examples of Bayesian statistics in F#. I am not a statistician, which makes me part of the very dangerous category of 'people who are not statisticians but talk about statistics'. To try to mitigate the problem I enlisted the help of Ralf Herbrich, who is a statistician and can catch my most blatant errors. No doubt I'll manage to hide my errors so cleverly that not even Ralf will spot them. In which case the fault is mine alone.
In the next post we'll look at some F# code to model the Bayesian inference process.