In the previous instalments of our seminar series we have explored how Bayesian methods can be used to estimate the epistemic uncertainty of a model, i.e. the uncertainty arising from the choice of the network’s parameters. In particular, two weeks ago we saw how variational inference applied to Bernoulli and Gaussian dropout can make Bayesian NNs tractable under certain assumptions. This week’s seminar will focus on a few numerical experiments that highlight the benefits and limitations of this approach. We will start with a brief summary of the main mathematical results seen so far, then discuss the difference between dropout and DropConnect, and end with an analysis of their robustness to noise when applied to CNNs trained on MNIST and CIFAR-10. Throughout the talk we will use the paper “Robustly representing uncertainty in deep neural networks through sampling” as our main reference.
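To make the sampling idea concrete before the talk, here is a minimal PyTorch sketch of Monte Carlo (Bernoulli) dropout at prediction time: dropout is kept active and several stochastic forward passes are averaged. The architecture, dropout rate and number of samples are illustrative assumptions, not the settings used in the experiments we will discuss.

```python
import torch
import torch.nn as nn

# Illustrative small CNN with Bernoulli dropout (not the paper's architecture).
class DropoutCNN(nn.Module):
    def __init__(self, n_classes=10, p=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p),                 # Bernoulli dropout, kept active at test time
            nn.Linear(64 * 7 * 7, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def mc_predict(model, x, n_samples=50):
    """Monte Carlo prediction: sample several stochastic forward passes.

    The mean of the sampled softmax outputs approximates the posterior
    predictive; their spread gives an estimate of the model's uncertainty.
    """
    model.train()  # keeps the dropout layers stochastic during the forward passes
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

# Usage on a batch of MNIST-sized images (28x28 grayscale).
model = DropoutCNN()
x = torch.randn(8, 1, 28, 28)
mean, std = mc_predict(model, x)
```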
Robustly representing uncertainty in deep neural networks through sampling
As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Using Bernoulli dropout with sampling at prediction time has recently been proposed as an efficient and well performing variational inference method for DNNs. However, sampling from other …
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting …
DropConnect is effective in modeling uncertainty of Bayesian deep networks
Deep neural networks (DNNs) have achieved state-of-the-art performance in many important domains, including medical diagnosis, security, and autonomous driving. In domains where safety is highly critical, an erroneous decision can result in serious consequences. While a perfect prediction accuracy is not always achievable, recent work on Bayesian deep networks shows that it is possible to know …
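Since the last reference contrasts DropConnect with dropout, a quick sketch may help fix the distinction: dropout zeroes entire activations, whereas DropConnect zeroes individual weights. The layer below is an illustrative assumption of how a fresh weight mask can be resampled on every forward pass (and hence at prediction time); it is not the implementation used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropConnectLinear(nn.Module):
    """Linear layer whose weights (rather than activations) are randomly zeroed.

    Each forward pass samples a new Bernoulli mask over the weight matrix, so
    repeated passes at prediction time yield samples from the approximate
    posterior predictive, analogously to MC dropout.
    """
    def __init__(self, in_features, out_features, p=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.p = p

    def forward(self, x):
        # Sample a Bernoulli(1 - p) mask over the weights and rescale,
        # mirroring the usual "inverted dropout" convention.
        mask = torch.bernoulli(torch.full_like(self.linear.weight, 1 - self.p))
        weight = self.linear.weight * mask / (1 - self.p)
        return F.linear(x, weight, self.linear.bias)
```

Stacking such layers in place of the dropout-plus-linear classifier and averaging repeated forward passes gives MC DropConnect predictions in the same way as the `mc_predict` sketch above.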