Symmetric tensors are multi-dimensional arrays invariant under permutation of their indices; equivalently, they correspond one-to-one with homogeneous multivariate polynomials. In many machine learning tasks, a symmetric tensor is commonly decomposed as a sum of symmetric outer products of vectors.
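As a minimal illustration (not the talk's algorithm), the sketch below builds a rank-2 symmetric 3-way tensor as a sum of two symmetric outer products, checks its permutation invariance, and verifies the correspondence with a homogeneous cubic polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4), rng.standard_normal(4)

# Rank-2 symmetric tensor: T = a (x) a (x) a + b (x) b (x) b.
T = np.einsum('i,j,k->ijk', a, a, a) + np.einsum('i,j,k->ijk', b, b, b)

# Symmetry: T is unchanged by any permutation of its three indices.
assert np.allclose(T, T.transpose(1, 0, 2))
assert np.allclose(T, T.transpose(2, 1, 0))

# Equivalent homogeneous cubic polynomial: p(x) = <x,a>^3 + <x,b>^3,
# recovered by contracting T with x along every index.
x = rng.standard_normal(4)
assert np.allclose(np.einsum('ijk,i,j,k->', T, x, x, x),
                   (x @ a)**3 + (x @ b)**3)
```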
In this talk, we present a new algorithm for computing low-rank symmetric tensor decompositions. Numerical experiments demonstrate that the method matches or outperforms state-of-the-art methods on standard performance metrics. We prove supporting theoretical guarantees through connections to algebraic geometry and dynamical systems.
Time permitting, we extend the method to compute a certain generalization of symmetric tensor decompositions. This enables estimation of a union of linear subspaces from very noisy point samples.
Joint work with Joao Pereira (Duke).