Geometry and Expressivity of Neuromanifolds: Where Algebraic Geometry Meets Neural Networks 

Maksym Zubkov
-
PDL C-38

In this talk, I will give an overview of neuroalgebraic geometry, a new field, analogous to algebraic statistics, that uses algebraic geometry to understand neural network architectures. Our main objects of study will be neuromanifolds and their expressivity. A neuromanifold is the image of the parameter space of a fixed neural network inside a fixed ambient space of functions, while expressivity is the capability of the network to approximate an element of that ambient space arbitrarily well. We will consider two types of activation functions: polynomial and rational. For each of these, we will construct the Zariski closure of the neuromanifold, called a neurovariety, and show that understanding the geometry of a neuromanifold or neurovariety is equivalent to understanding long-standing and beautiful objects of classical algebraic geometry, such as secant varieties and Chow varieties. Finally, we will briefly discuss analogues of neuromanifolds over finite fields and show how simple point counting can reveal further insights into the expressivity of neural networks and the geometry of a neuromanifold over the real or complex numbers.
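As a small illustration of the polynomial case (my own sketch, not part of the abstract): for a shallow network with $k$ hidden neurons and monomial activation $\sigma(t) = t^r$, the neuromanifold consists of sums of $r$-th powers of linear forms, and its Zariski closure is a secant variety of a Veronese variety.

```latex
% Shallow network on inputs x in R^n with k hidden neurons
% and monomial activation sigma(t) = t^r:
f_{W,\lambda}(x) \;=\; \sum_{i=1}^{k} \lambda_i \, (w_i \cdot x)^r,
\qquad W = (w_1, \dots, w_k) \in (\mathbb{R}^n)^k,\quad \lambda \in \mathbb{R}^k.
% The neuromanifold is the image of the map (W, lambda) -> f_{W,lambda}
% inside the space of degree-r forms in n variables. Its Zariski closure,
% the neurovariety, is the k-th secant variety of the Veronese variety:
\overline{\{\, f_{W,\lambda} \,\}}
  \;=\; \sigma_k\!\bigl(\nu_r(\mathbb{P}^{n-1})\bigr).
```

Here the classical objects named in the abstract appear directly: expressivity questions for this architecture become questions about the dimension and defectivity of secant varieties of Veronese varieties.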

 
