A Homological Theory of Functions

Greg Yang, Microsoft Research
THO 211

I will make precise and prove the following statement: Whether machine learning works is predicated on the homology of what you are trying to learn. More precisely, given a class of (discrete) functions, there is a natural way to construct from it a simplicial complex. I will show that, to learn an unknown function from such a class using random samples of (input, output) pairs, the number of samples needed is upper bounded (tightly) by, roughly speaking, the highest nonvanishing homological degree of this simplicial complex. As a consequence of this general algebraic/topological framework, we also obtain a method for separating complexity classes that may be viable for attacking open problems such as P vs NP. I will discuss some new proofs of old results within this framework. No background beyond familiarity with homology is assumed for this talk.
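
(As a rough sketch, in notation of my own rather than the talk's: write \mathcal{C} for the function class and K(\mathcal{C}) for the simplicial complex constructed from it. The claimed bound then has the shape

\[
  m(\mathcal{C}) \;\lesssim\; \max\{\, d : H_d(K(\mathcal{C})) \neq 0 \,\},
\]

where m(\mathcal{C}) denotes the number of random (input, output) samples needed to identify an unknown function in \mathcal{C}. The precise construction of K(\mathcal{C}), the exact form of the bound, and its tightness are the subject of the talk.)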