Although machine learning researchers have introduced a plethora of useful constructions for learning over Euclidean space, many kinds of data across applications benefit from, if not necessitate, a non-Euclidean treatment. In this talk I cover the need for Riemannian geometric constructs (1) to build more principled generalizations of common Euclidean operations used in geometric machine learning models, and (2) to enable general manifold density learning in contexts that require it, such as theoretical physics, robotics, and computational biology. I will cover one of my papers that fits into (1), namely the ICML 2020 paper "Differentiating through the Fréchet Mean." I will also cover two of my papers that fit into (2), namely the NeurIPS 2020 paper "Neural Manifold Ordinary Differential Equations" and the NeurIPS 2021 paper "Equivariant Manifold Flows." Finally, I will briefly discuss directions of relevant ongoing work.
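For context, the Fréchet mean is the standard generalization of the Euclidean average to a Riemannian manifold $(M, d)$: given points $x_1, \ldots, x_n \in M$, it is defined as

\[
\mu_{\mathrm{Fr}} \;=\; \operatorname*{arg\,min}_{m \in M} \; \sum_{i=1}^{n} d(m, x_i)^2,
\]

which recovers the ordinary arithmetic mean when $M$ is Euclidean space with the usual distance. As its title suggests, the ICML 2020 paper concerns differentiating through this implicit argmin, so the mean can be used inside manifold-valued learning models.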
This talk will be hybrid, held both in person and online via Zoom.