
Stochastic gradient method: when and why does it work?

Dmitriy Drusvyatskiy (University of Washington)
Monday, May 1, 2023 - 2:30pm to 3:30pm
ECE 026
The stochastic gradient method is the workhorse algorithm for learning from data. Although originally designed for smooth optimization, the method is now routinely used to optimize functions that are neither smooth nor convex. Yet, the stochastic gradient method often works just as well on such highly irregular problems as it does on smooth ones. The goal of this talk is to explain this phenomenon. Roughly speaking, we will see that minimizers of typical problems lie on a distinguished smooth manifold, which endows the stochastic gradient method with smooth dynamics up to a small-order error. This viewpoint then leads to several consequences, such as local rates of convergence, asymptotic normality of averaged iterates, and saddle point avoidance.
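
For concreteness, here is a minimal sketch of the kind of setting the abstract describes: the stochastic subgradient method applied to a nonsmooth, nonconvex problem whose minimizers form a smooth set. The specific objective (a toy phase-retrieval-style loss), the step-size schedule, and the use of iterate averaging are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

# Toy nonsmooth, nonconvex objective (an illustrative assumption, not from the talk):
# f(x) = E_{a,b} |(a^T x)^2 - b|, a phase-retrieval-style loss. The function is
# nonsmooth and nonconvex, yet its minimizers {x*, -x*} sit on a smooth set.
rng = np.random.default_rng(0)
d = 10
x_star = rng.normal(size=d)              # ground-truth signal

def sample_subgradient(x):
    """One stochastic subgradient of |(a^T x)^2 - b| at x, for a fresh sample (a, b)."""
    a = rng.normal(size=d)
    b = (a @ x_star) ** 2
    r = (a @ x) ** 2 - b
    return np.sign(r) * 2.0 * (a @ x) * a  # sign(r) is a valid element of the subdifferential

# Stochastic (sub)gradient method with decaying steps and Polyak-Ruppert averaging.
# We start near the solution, since the abstract's guarantees are local in nature.
x = x_star + 0.5 * rng.normal(size=d)
x_avg = np.zeros(d)
T = 20_000
for t in range(1, T + 1):
    step = 0.1 / np.sqrt(t)
    x = x - step * sample_subgradient(x)
    x_avg += (x - x_avg) / t             # running average of the iterates

dist = min(np.linalg.norm(x_avg - x_star), np.linalg.norm(x_avg + x_star))
print(f"distance of averaged iterate to the solution set: {dist:.3f}")
```

Despite the kinks of the absolute-value loss, the averaged iterates in a run like this typically settle near the solution set, which is the phenomenon the talk sets out to explain.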