33 Oxford Street, Cambridge, MA 02138


Despite the success of deep learning, challenges to progress remain. Deep models require vast datasets to train, can fail to generalize under surprisingly small changes in domain, and lack guarantees on performance. Incorporating symmetry constraints into neural networks has produced equivariant neural networks (ENNs), which have helped address these challenges. I will discuss several successful applications, such as trajectory prediction, ocean current forecasting, and robotic control. However, current ENNs also have limits to their effectiveness. In many applications where the symmetry is only approximate or does not hold across the entire input distribution, equivariance may not be the correct inductive bias to aid learning and may even hurt model performance. I will discuss recent work theoretically characterizing the errors that can result from mismatched symmetry biases, which can be used for model selection. I will also suggest methods for relaxing symmetry constraints so that approximately equivariant models can still be used in these situations.
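To make the central terms concrete, below is a minimal sketch (not taken from the talk) of the equivariance condition f(g·x) = g·f(x) for 90-degree rotations, and of one simple assumed way to relax it: blending an equivariant map with an unconstrained one via an interpolation weight alpha. The functions and weights here are hypothetical illustrations, not the speaker's models.

```python
# Minimal sketch: exact vs. relaxed (approximate) equivariance under 90-degree rotations.
# All maps and parameters below are illustrative assumptions, not the methods from the talk.
import numpy as np

def rotate90(x):
    """Group action g: rotate a 2D array by 90 degrees."""
    return np.rot90(x)

def equivariant_map(x):
    """A map that commutes with rotation (a pointwise nonlinearity)."""
    return np.tanh(x)

def unconstrained_map(x, w):
    """A map with no symmetry constraint: position-dependent scaling
    by hypothetical learned weights w breaks rotation symmetry."""
    return x * w

def relaxed_map(x, w, alpha):
    """Approximately equivariant map: alpha interpolates between the
    equivariant and unconstrained components (alpha=1 is fully equivariant)."""
    return alpha * equivariant_map(x) + (1 - alpha) * unconstrained_map(x, w)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
w = rng.standard_normal((4, 4))

# Exact equivariance: f(g·x) equals g·f(x).
print(np.allclose(equivariant_map(rotate90(x)), rotate90(equivariant_map(x))))  # True

# Relaxed map: the equivariance error grows as alpha moves away from 1,
# which is the kind of mismatch an approximately equivariant model can trade off.
for alpha in (1.0, 0.9, 0.5):
    err = np.abs(relaxed_map(rotate90(x), w, alpha)
                 - rotate90(relaxed_map(x, w, alpha))).max()
    print(f"alpha={alpha:.1f}  equivariance error={err:.3f}")
```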