About this Event
The IACS seminar series is free and open to the public but registration is required.
Talk Abstract: While deep learning has demonstrable success on many tasks, the point estimates provided by standard deep models can lead to overfitting and offer no uncertainty quantification on predictions. Yet when models are applied to critical domains such as autonomous driving, precision health care, or criminal justice, reliable measurements of a model's predictive uncertainty may be as crucial as the correctness of its predictions. At the same time, recent literature has paid increasing attention to separating sources of predictive uncertainty, with the goal of distinguishing uncertainties reducible through additional data collection from those that reflect stochasticity inherent in the data-generating process. In this talk, Dr. Pan will examine a number of deep (Bayesian) models that promise to capture complex forms of predictive uncertainty. She will also examine metrics commonly used to evaluate such uncertainties. Her aim is to highlight the strengths and limitations of both the models and the metrics, and she will discuss potential ways to improve each in ways that are meaningful for downstream tasks.