In the previous instalments of our seminar series, we explored how Bayesian methods can be used to estimate the epistemic uncertainty of a model, i.e. the uncertainty arising from the choice of the network’s parameters. In particular, two weeks ago we saw how variational inference applied to Bernoulli and Gaussian dropout can make Bayesian neural networks tractable under certain assumptions. This week’s seminar focuses on a few numerical experiments that highlight the benefits and limitations of this approach. We will start with a brief summary of the main mathematical results seen so far, then discuss the difference between dropout and DropConnect, and end with an analysis of their robustness to noise when applied to CNNs trained on MNIST and CIFAR-10. Throughout the talk we will use the paper “Robustly representing uncertainty through sampling in deep neural networks” [Mcc18R] as our main reference.
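To make the dropout/DropConnect distinction concrete, here is a minimal sketch of Monte Carlo sampling at test time: dropout masks activations, while DropConnect masks weights, and in both cases the spread over stochastic forward passes serves as an uncertainty estimate. PyTorch is our choice here, not necessarily the framework used in [Mcc18R], and the names MCDropoutNet, dropconnect_linear and mc_predict are illustrative helpers rather than anything from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutNet(nn.Module):
    """Small MNIST-sized classifier with dropout left active at test time."""
    def __init__(self, in_features=784, hidden=256, classes=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        self.p = p

    def forward(self, x):
        # Dropout masks *activations*; training=True keeps the mask stochastic
        # even in evaluation mode, which is what MC dropout requires.
        h = F.dropout(F.relu(self.fc1(x)), p=self.p, training=True)
        return self.fc2(h)

def dropconnect_linear(x, weight, bias=None, p=0.5):
    # DropConnect masks the *weights* instead: a fresh Bernoulli mask over the
    # weight matrix is drawn at every forward pass (rescaled to keep the mean).
    mask = torch.bernoulli(torch.full_like(weight, 1.0 - p))
    return F.linear(x, weight * mask / (1.0 - p), bias)

def mc_predict(model, x, n_samples=50):
    """Average the softmax over stochastic forward passes; the per-class
    standard deviation is a simple estimate of epistemic uncertainty."""
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

# Example: predictive mean and spread for a batch of 4 flattened inputs.
mean, std = mc_predict(MCDropoutNet(), torch.randn(4, 784))
```

Swapping the F.dropout call in forward for dropconnect_linear yields the DropConnect counterpart, so the two sampling schemes can be compared within the same architecture.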
References
[Mcc18R] Patrick McClure and Nikolaus Kriegeskorte, Robustly representing uncertainty through sampling in deep neural networks, 2018.
[Gal16D] Yarin Gal and Zoubin Ghahramani, Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ICML 2016.
[Mob21D] Aryan Mobiny et al., DropConnect is effective in modeling uncertainty of Bayesian deep networks, Scientific Reports, 2021.