MIT-Harvard Communications Information Networks Circuits and Signals (CINCS) / Hamilton Institute Seminar
Wednesday, September 30, 2020 11am
About this Event
When Does Deep Learning Succeed (and Fail)? Robustness, Interpretability and Fairness in Deep Learning
Abstract: In the last couple of years, a lot of progress has been made in understanding various fundamental aspects of deep models. A key question is how to measure success in deep learning. The classical answer is to evaluate the performance of trained models on a held-out test set. However, it has been shown that this measure, although important, does not tell the whole story: models with impressive test-set accuracy can be extremely fragile to natural or adversarial noise, can suffer from catastrophically poor interpretability, or can produce biased and unfair outcomes. In this talk, I will explain some success and failure stories of deep models by characterizing the intertwined aspects of their robustness, interpretability, and fairness. I will then present solutions that provably mitigate these multifaceted issues in deep models.