About this Event
Deep neural networks are a rich family of function approximators, ubiquitous across many domains that leverage machine learning. Our understanding of their behavior, design, and limitations, however, is less well developed. Can we quantitatively characterize the important aspects of deep learning and develop an understanding of its complex design space? In this talk, I describe some of my research on building foundations for deep learning, organized around three related threads. First, I describe exact connections between deep neural networks, in the limit of infinitely wide hidden layers, and new classes of Gaussian processes and kernel methods. Second, I discuss an equivalence between wide, deep neural networks and linear models, and characterize a nonlinear regime where the equivalence breaks down. Third, I discuss scaling trends for the performance of supervised deep learning in practice. Building on these threads, I highlight areas for further research in core machine learning, as well as a few promising application areas for machine learning in the physical sciences.
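For readers curious about the second thread, the sketch below is a minimal, illustrative example (not code from the talk): it compares a moderately wide multilayer perceptron with its first-order Taylor expansion around the initial parameters, the kind of linearized model whose equivalence with wide networks the abstract refers to. The widths, initialization scale, and perturbation size are illustrative assumptions.

```python
# Illustrative sketch only: a small MLP vs. its linearization ("tangent" model)
# around the initial parameters, evaluated for a small parameter displacement.
import jax
import jax.numpy as jnp

def init_params(key, widths):
    # Standard 1/sqrt(fan_in) Gaussian initialization (an illustrative choice).
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append(jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in))
    return params

def mlp(params, x):
    h = x
    for w in params[:-1]:
        h = jnp.tanh(h @ w)
    return h @ params[-1]

params0 = init_params(jax.random.PRNGKey(0), [4, 512, 512, 1])  # wide hidden layers
x = jax.random.normal(jax.random.PRNGKey(1), (8, 4))

# A small random displacement of the parameters away from initialization.
deltas = [1e-2 * jax.random.normal(jax.random.PRNGKey(i + 2), w.shape)
          for i, w in enumerate(params0)]
params1 = [w + d for w, d in zip(params0, deltas)]

# Linearized prediction: f(params0) + J(params0) * (params1 - params0),
# computed via a Jacobian-vector product.
f0, jvp_out = jax.jvp(lambda p: mlp(p, x), (params0,), (deltas,))
linear_pred = f0 + jvp_out
full_pred = mlp(params1, x)

print("max |full - linearized|:", jnp.max(jnp.abs(full_pred - linear_pred)))
```

For small parameter displacements and wide hidden layers, the printed discrepancy is tiny; shrinking the widths or enlarging the displacement makes the gap grow, loosely mirroring the nonlinear regime mentioned in the abstract.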
Event Details
Dial-In Information
Join Zoom meeting
https://harvard.zoom.us/j/97695974179?pwd=NEdNam9BRkVNUThZWVQ5R1JlaW5tZz09
Password: 698772
Join by telephone (use any number to dial in)
+1 929 436 2866
+1 301 715 8592
+1 312 626 6799
+1 669 900 6833
+1 253 215 8782
+1 346 248 7799