This article covers AUDIO CONFIT, a two-stage contrastive-learning approach for fine-tuning pre-trained audio models. AUDIO CONFIT aims to balance fitting the model to the training data against generalizing to new domains. Across a variety of audio classification tasks, the method is shown to be efficient, robust, and generalizable. The authors also analyse beneficial properties of AUDIO CONFIT representations by investigating isotropy, representational separability, and dimensionality contribution.
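The summary does not spell out the training objective, but contrastive fine-tuning recipes of this kind typically build on a supervised contrastive loss that pulls same-class embeddings together and pushes different-class embeddings apart. Below is a minimal NumPy sketch of such a loss, not the paper's exact formulation; the function name and the temperature value are illustrative assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive loss over a batch of embeddings.

    NOTE: a sketch of the general technique, not AUDIO CONFIT's objective.
    Embeddings sharing a label act as positives for each other; all other
    batch samples act as negatives.
    """
    # L2-normalize so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise scaled similarities

    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    # exclude self-similarity, then log-softmax over the remaining batch
    sim_masked = np.where(mask_self, -np.inf, sim)
    sim_masked = sim_masked - sim_masked.max(axis=1, keepdims=True)  # stability
    log_prob = sim_masked - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))

    # positives: same label, different sample
    positives = (labels[:, None] == labels[None, :]) & ~mask_self
    # mean log-probability of positives per anchor, negated
    per_anchor = (np.where(positives, log_prob, 0.0).sum(axis=1)
                  / np.maximum(positives.sum(axis=1), 1))
    return -per_anchor.mean()
```

As a sanity check, a batch whose embeddings cluster by label should score a lower loss than the same embeddings with labels scrambled; the temperature controls how sharply hard negatives are weighted.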


Publication date: 25 Sep 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2309.11895