I'm continuing to read the book "Boosting", about the machine learning method that improves the quality of an underlying algorithm by combining multiple differently trained copies of it. I reformulated AdaBoost in a form that is more understandable to me, more like a program and less like a mathematical formula, and it turned out that the model produced by AdaBoost works the same way as a Bayesian model. I wrote it up in my other blog, but I want to share the link here too:
To follow the transformation, you'll probably want to read the previous installments too, which are linked from the post. I've also used the version of Bayesian logic that works with weights rather than probabilities; its description is linked there as well.
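To give a rough feel for the parallel without repeating the linked post: AdaBoost's final model is a weighted vote of weak classifiers, and that accumulation of per-classifier weights is what resembles a Bayesian model working with weights. Below is a minimal, self-contained sketch of classic AdaBoost on 1-D data with threshold "stumps" as the weak learners. This is my own illustrative toy, not the book's code or the reformulation from the post; all names in it are made up for the example.

```python
import math

def train_adaboost(points, labels, rounds=5):
    # Weak learner: a threshold "stump" on 1-D data that predicts
    # +1 if x >= threshold else -1 (or the inverse, direction = -1).
    n = len(points)
    weights = [1.0 / n] * n  # start with uniform weight on every example
    model = []  # list of (alpha, threshold, direction) triples

    for _ in range(rounds):
        # Pick the stump with the lowest weighted error on the current weights.
        best = None
        for threshold in points:
            for direction in (+1, -1):
                err = sum(w for x, y, w in zip(points, labels, weights)
                          if direction * (1 if x >= threshold else -1) != y)
                if best is None or err < best[0]:
                    best = (err, threshold, direction)
        err, threshold, direction = best

        err = max(err, 1e-10)  # guard against division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # this stump's vote weight

        # Re-weight the examples: misclassified ones get heavier,
        # so the next stump focuses on them.
        for i, (x, y) in enumerate(zip(points, labels)):
            pred = direction * (1 if x >= threshold else -1)
            weights[i] *= math.exp(-alpha * y * pred)
        total = sum(weights)
        weights = [w / total for w in weights]

        model.append((alpha, threshold, direction))
    return model

def predict(model, x):
    # The final prediction is a weighted vote of the stumps: each one
    # contributes its alpha, and the sign of the accumulated score decides.
    score = sum(alpha * direction * (1 if x >= threshold else -1)
                for alpha, threshold, direction in model)
    return 1 if score >= 0 else -1
```

For example, on `points = [1, 2, 3, 4, 5, 6]` with `labels = [-1, -1, -1, 1, 1, 1]`, training finds a stump at threshold 4 and the weighted vote separates the two halves. The point of the sketch is the shape of `predict`: it just sums log-scale weights per classifier, which is the same kind of accumulation the weight-based Bayesian formulation does.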