This study explores invariance in neural networks, a property required by many tasks. The authors propose measures that quantify a network's invariance in terms of its internal representations. The measures are efficient, interpretable, applicable to any neural network model, and more sensitive to invariance than previously proposed measures. They are validated on affine transformations of the CIFAR-10 and MNIST datasets, where their stability and interpretability are demonstrated. Applying the measures to CNN models shows that internal invariance is remarkably stable under random weight initialization, but not under changes of dataset or transformation.
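The summary does not spell out the exact formulation, but activation-based invariance measures of this kind are commonly built from the variance of a neuron's response across transformed versions of an input, normalized by its variance across inputs. The sketch below is a minimal, hypothetical version of such a measure (function names and the specific normalization are assumptions, not the paper's definitions):

```python
import numpy as np

def transformational_variance(acts):
    """Variance of each neuron across transformed versions of the
    same sample, averaged over samples.
    acts: array of shape (n_samples, n_transforms, n_neurons)."""
    return acts.var(axis=1).mean(axis=0)

def sample_variance(acts):
    """Variance of each neuron across samples, averaged over
    transformations."""
    return acts.var(axis=0).mean(axis=0)

def normalized_variance(acts, eps=1e-8):
    """Ratio of transformational to sample variance (assumed form).
    Values near 0 mean the neuron is invariant to the transformation;
    values near 1 mean it varies as much with the transformation as
    with the input itself."""
    return transformational_variance(acts) / (sample_variance(acts) + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples, n_transforms = 100, 10
    # Neuron 0: constant across transforms (invariant), varies over samples.
    invariant = np.repeat(rng.normal(size=(n_samples, 1)), n_transforms, axis=1)
    # Neuron 1: fully random, so it varies with the transformation too.
    variant = rng.normal(size=(n_samples, n_transforms))
    acts = np.stack([invariant, variant], axis=2)
    print(normalized_variance(acts))  # first entry ~0, second entry near 1
```

In this toy setup the first neuron's score is essentially zero (it never changes under the transformation), while the second's is close to one; aggregating such per-neuron scores over a layer gives an interpretable, model-agnostic picture of where invariance emerges inside the network.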

Publication date: 27 Oct 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2310.17404