Noise injection methods such as dropout are a popular way of implicitly capturing epistemic uncertainty in neural networks. Noise injection is usually applied during both training and inference. During training, the network learns to reduce the variance introduced by the noise within the data distribution. Running inference several times on the same input with noise injection enabled then makes it possible to estimate the remaining uncertainty, which is mostly epistemic. Applying noise injection during inference, however, increases the computational cost and can be too slow for real-time applications.
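To make the sampling-based approach concrete, here is a minimal sketch of Monte Carlo dropout in PyTorch; the network architecture, dropout rate, and number of samples are illustrative choices, not part of the original text. The key point is that dropout stays active at inference time and the predictive mean and variance are estimated across repeated stochastic forward passes:

```python
import torch
import torch.nn as nn

# A small illustrative network with dropout between layers.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout enabled
    and return the per-output predictive mean and variance."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

x = torch.randn(8, 16)            # a batch of 8 inputs
mean, var = mc_dropout_predict(model, x)
```

Note the cost: every prediction requires `n_samples` full forward passes, which is exactly the overhead that motivates a deterministic alternative.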
In this talk we will discuss how approximate variance propagation can capture the variance introduced by noise injection in a single deterministic forward pass, making it a faster alternative to sampling at inference time.
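As a rough illustration of the idea (not necessarily the speakers' exact formulation), a mean and a variance can be propagated through the network layer by layer. Under the simplifying assumption of independent activations, a linear layer maps the variance through the squared weights, and inverted dropout preserves the mean while the Bernoulli mask adds variance. Nonlinearities such as ReLU require additional moment approximations (e.g., assuming Gaussian activations) and are omitted here:

```python
import torch

def propagate_linear(mu, var, weight, bias):
    """Moments through y = x @ W.T + b, assuming independent inputs."""
    mu_out = mu @ weight.T + bias
    var_out = var @ (weight ** 2).T
    return mu_out, var_out

def propagate_dropout(mu, var, p):
    """Moments through inverted dropout (scale by 1/(1-p)):
    the mean is preserved; the mask contributes p*mu^2/(1-p) variance."""
    q = 1.0 - p
    return mu, (var + p * mu ** 2) / q

# Single deterministic pass: start with the input as the mean, zero variance.
w1, b1 = torch.randn(64, 16), torch.zeros(64)
w2, b2 = torch.randn(1, 64), torch.zeros(1)

x = torch.randn(8, 16)
mu, var = x, torch.zeros_like(x)
mu, var = propagate_linear(mu, var, w1, b1)
mu, var = propagate_dropout(mu, var, p=0.5)
mu, var = propagate_linear(mu, var, w2, b2)
# mu and var now approximate the MC-dropout mean and variance in one pass.
```

One forward pass replaces dozens of stochastic samples, at the price of the independence and distributional assumptions baked into the moment propagation.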