Title: Quantifying Model Uncertainty in Neural Networks
Abstract: In this paper, we study how to obtain uncertainty bounds for neural networks (NNs) that capture model uncertainty (also known as epistemic uncertainty). We present NOMU (Neural Optimization-based Model Uncertainty), an algorithmic approach based on a specific network architecture and loss function. The loss function we propose is motivated by desiderata for "good" uncertainty bounds. Our algorithm ensures that the obtained uncertainty bounds are themselves representable as NNs and can therefore be optimized exactly via mixed integer programming. This makes our approach particularly relevant in settings where the uncertainty bounds of the trained NNs are used in a subsequent Bayesian optimization, active learning, or reinforcement learning task. We present an extensive experimental evaluation of our approach in low-dimensional regression settings via simulations and benchmark it against several state-of-the-art methods.