
Tackling the Complexity of Modern Machine Learning

Yasaman Bahri, Research Scientist at Google Research, Brain Team

Apr 14, 2022

Deep neural networks are a rich family of function approximators, ubiquitous across many domains that leverage machine learning. Our understanding of their behavior, design, and limitations, however, remains much less developed. Can we quantitatively characterize the important aspects of deep learning and develop an understanding of its complex design space? In this talk, I describe some of my research on building foundations for deep learning, organized around three related threads. First, I describe exact connections between deep neural networks, in the limit of infinitely wide hidden layers, and new classes of Gaussian processes and kernel methods. Second, I discuss an equivalence between wide, deep neural networks and linear models, and characterize a nonlinear regime where this equivalence breaks down. Third, I discuss scaling trends in the performance of supervised deep learning in practice. Building on these threads, I highlight areas for further research in core machine learning, as well as a few promising application areas for machine learning in physical science.
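
To make the first thread concrete, below is a minimal NumPy sketch (not part of the talk materials) of the infinite-width correspondence for a fully connected ReLU network: the network's prior over functions becomes a Gaussian process whose covariance follows the layerwise arccosine-kernel recursion of Cho & Saul, so exact Bayesian prediction reduces to Gaussian-process regression with that kernel. The depth, variance hyperparameters, and random data here are illustrative assumptions, not values from the talk.

```python
import numpy as np

def nngp_relu_kernel(X1, X2, depth=3, sigma_w2=2.0, sigma_b2=0.1):
    """NNGP covariance of an infinitely wide fully connected ReLU network.

    Implements the layerwise arccosine-kernel recursion; sigma_w2 / sigma_b2
    are the weight / bias variances (illustrative values).
    """
    d = X1.shape[1]
    # Layer 0: scaled linear kernel between the two sets of inputs.
    K12 = sigma_w2 * (X1 @ X2.T) / d + sigma_b2
    K11 = sigma_w2 * np.sum(X1**2, axis=1) / d + sigma_b2  # diag of K(X1, X1)
    K22 = sigma_w2 * np.sum(X2**2, axis=1) / d + sigma_b2  # diag of K(X2, X2)

    for _ in range(depth):
        norms = np.sqrt(np.outer(K11, K22))
        theta = np.arccos(np.clip(K12 / norms, -1.0, 1.0))
        # Closed-form Gaussian expectation E[relu(u) relu(v)] for ReLU units.
        K12 = sigma_b2 + sigma_w2 * norms * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        # Diagonal entries have theta = 0, so the expectation is K / 2.
        K11 = sigma_b2 + sigma_w2 * K11 / 2.0
        K22 = sigma_b2 + sigma_w2 * K22 / 2.0
    return K12

# Exact Bayesian prediction for the infinitely wide network = GP regression.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(20, 5)), rng.normal(size=(20, 1))
X_test = rng.normal(size=(4, 5))
K_tt = nngp_relu_kernel(X_train, X_train)
K_st = nngp_relu_kernel(X_test, X_train)
y_mean = K_st @ np.linalg.solve(K_tt + 1e-6 * np.eye(len(X_train)), y_train)
print(y_mean.shape)  # (4, 1)
```

A related recursion over derivative covariances gives the tangent kernel that underlies the equivalence between wide networks and linear models mentioned in the second thread.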

Speaker Bio

Yasaman Bahri is a Research Scientist at Google Brain. Her research interests are in machine learning as well as its application to physical science; her recent work has focused on building foundations for deep learning. Prior to joining Google Brain, she completed her Ph.D. in Physics (2017) at UC Berkeley, in the area of quantum condensed matter theory. Her undergraduate studies were also at Berkeley, where she received B.A. degrees in Physics and Mathematics with highest honors. She is a recipient of the NSF Graduate Fellowship and the Rising Stars Award in EECS.

Contact

Host: Sham Kakade
Contact: Jessica Brenn