The researchers investigate the problem of learning equivariant neural networks via gradient descent. Although building known symmetries into a model improves the performance of learning pipelines in fields such as biology and computer vision, the study shows that learning the resulting networks remains hard. The researchers prove lower bounds for shallow graph neural networks, convolutional networks, invariant polynomials, and frame-averaged networks, and these bounds scale superpolynomially or exponentially in the relevant input dimension. Even with the inductive bias provided by symmetry, therefore, the complete classes of functions represented by equivariant neural networks are hard to learn via gradient descent.
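
To make the object of study concrete, here is a minimal sketch (an illustration, not code from the paper) of a permutation-equivariant linear layer of the kind that shallow graph and set networks compose. The function name, shapes, and weights are assumptions chosen for the example:

```python
import numpy as np

def perm_equivariant_layer(X, W1, W2):
    """DeepSets-style linear layer that is equivariant to row
    permutations: layer(P @ X) == P @ layer(X) for any n x n
    permutation matrix P.  X has shape (n, d)."""
    n = X.shape[0]
    mean_pool = X.mean(axis=0, keepdims=True)        # (1, d); invariant to row order
    return X @ W1 + np.ones((n, 1)) @ mean_pool @ W2  # equivariant + invariant terms

rng = np.random.default_rng(0)
n, d, d_out = 5, 3, 4
X = rng.normal(size=(n, d))
W1 = rng.normal(size=(d, d_out))
W2 = rng.normal(size=(d, d_out))

# Check equivariance under a random row permutation P.
P = np.eye(n)[rng.permutation(n)]
assert np.allclose(perm_equivariant_layer(P @ X, W1, W2),
                   P @ perm_equivariant_layer(X, W1, W2))
```

Because the mean-pooled term is unchanged by any reordering of the rows, the layer commutes with permutations by construction; the paper's lower bounds say that, even with this symmetry built in, gradient descent struggles to learn the full class of functions such architectures represent.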


Publication date: 3 Jan 2024
Project Page: https://arxiv.org/abs/2401.01869
Paper: https://arxiv.org/pdf/2401.01869