Why is Boosting with Decision Stumps so OP? (Slides)

less than 1 minute read

As part of a machine learning paper reading group at CWRU that I was grateful to be a part of, the group’s advisor encouraged me to cover the paper “Explaining the Success of AdaBoost and Random Forests as Interpolating Classifiers” by Wyner et al. (2017). While I’m not entirely sure the paper explains away 100% of AdaBoost’s weirdly good generalization behavior, its core idea is fairly satisfying: because the ensemble interpolates the training data, the influence of a noisy point gets confined to a small “spike” in the decision boundary rather than distorting the fit globally.
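The flavor of the paper’s experiments is easy to poke at yourself with scikit-learn. The snippet below is a minimal sketch of my own (the dataset and hyperparameters are illustrative assumptions, not the authors’ setup): boost decision stumps on data with flipped labels until training error is (near) zero, then check that test accuracy holds up anyway.

```python
# Minimal sketch (not from the paper/slides): AdaBoost with decision stumps
# driven toward interpolation on noisy labels. Dataset and hyperparameters
# are illustrative choices, not the authors' experimental setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary problem with ~10% of labels flipped to inject noise.
X, y = make_classification(n_samples=1000, n_features=10,
                           flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A decision stump is just a depth-1 tree.
stump = DecisionTreeClassifier(max_depth=1)

# Many boosting rounds so the ensemble can (nearly) interpolate the
# training set. (Use `base_estimator` instead of `estimator` on
# scikit-learn versions before 1.2.)
model = AdaBoostClassifier(estimator=stump, n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Training accuracy often reaches ~1.0 despite the flipped labels,
# yet test accuracy typically doesn't collapse.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```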
