Lechao Xiao, Google Brain

Zoom  https://washington.zoom.us/j/99239593243
Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. In this talk, I will share some progress on this question through two recent papers. In the first part, we treat the triplet (data, model, inference algorithm) as an integrated system and discuss their synergies and symmetries. The second part focuses solely on the benefits of the architecture's inductive biases: we explain how the topology of convolutional networks (locality, hierarchy, model scaling, etc.) can help overcome the curse of dimensionality in high-dimensional settings.
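
As a rough illustration of the locality bias mentioned above (this sketch is not from the talk; the layer sizes are hypothetical), a convolutional layer's parameter count depends only on its kernel size and channel counts, while a fully connected layer's grows with the input resolution:

```python
# Compare parameter counts of a fully connected layer and a 2D convolutional
# layer on the same input, illustrating how locality (small kernels with
# weight sharing) keeps the parameter count independent of input resolution.
# All sizes below are hypothetical and chosen only for illustration.

def dense_params(h, w, c_in, c_out):
    # A fully connected layer maps all h * w * c_in inputs to c_out units.
    return h * w * c_in * c_out + c_out  # weights + biases

def conv_params(k, c_in, c_out):
    # A k x k convolution reuses the same kernel at every spatial position,
    # so its parameter count does not depend on the input resolution.
    return k * k * c_in * c_out + c_out  # weights + biases

if __name__ == "__main__":
    h, w, c_in, c_out, k = 32, 32, 3, 64, 3
    print(dense_params(h, w, c_in, c_out))  # 196672
    print(conv_params(k, c_in, c_out))      # 1792
```

Even on a small 32x32 input, the local, weight-shared layer uses two orders of magnitude fewer parameters, which is one concrete sense in which locality mitigates the growth of model size with input dimension.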