To join this seminar virtually: please click here.
Abstract: Ensemble methods have historically used either high-bias base learners (e.g. through boosting) or high-variance base learners (e.g. through bagging). Modern neural networks cannot be understood through this classic bias-variance tradeoff, yet "deep ensembles" are pervasive in safety-critical and high-uncertainty application domains. This talk will cover surprising and counterintuitive phenomena that emerge when ensembling overparameterized base models like neural networks. While deep ensembles improve generalization in a simple and cost-effective manner, their accuracy and robustness are often outperformed by single (but larger) models. Furthermore, discouraging diversity amongst component models often improves the ensemble's predictive performance, counter to classic intuitions underpinning bagging and feature subsetting techniques. I will connect these empirical findings with new theoretical characterizations of overparameterized ensembles, and I will conclude with implications for uncertainty quantification, robustness, and decision making.
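For reference (not part of the talk itself), a minimal sketch of what a "deep ensemble" looks like in code: several identically configured networks are trained from different random initializations on the same data, and their predicted class probabilities are averaged. The use of scikit-learn's MLPClassifier, the make_moons toy dataset, and the five-member ensemble size are illustrative assumptions, not details from the speaker's work.

```python
# Illustrative sketch (assumptions noted above): a deep ensemble averages the
# predictions of several independently initialized networks trained on the
# same data -- no bagging or feature subsetting involved.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train identically configured networks; only the random initialization
# (and the stochasticity of training) differs across ensemble members.
members = [
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=seed)
    .fit(X_train, y_train)
    for seed in range(5)
]

# Ensemble prediction: average the members' class probabilities.
probs = np.mean([m.predict_proba(X_test) for m in members], axis=0)
ensemble_acc = accuracy_score(y_test, probs.argmax(axis=1))
single_acc = accuracy_score(y_test, members[0].predict(X_test))
print(f"single model: {single_acc:.3f}  ensemble: {ensemble_acc:.3f}")
```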
Event Type: Seminar
Location: ICCS X836 / Zoom
Speaker: Geoff Pleiss, UBC Statistics Assistant Professor
Event date time: -