This paper presents AMPLIFY, a new Mixup method for data augmentation that reduces the influence of noise and aberrant features in the original samples on prediction results. Unlike common Mixup methods, AMPLIFY adds no trainable parameters and has a low computational cost. It performs Mixup within the Attention mechanism of the Transformer and proves effective in text classification tasks on seven benchmark datasets. The method offers a new way to enhance the performance of pre-trained models such as BERT, ALBERT, RoBERTa, and GPT.
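The paper's core idea is applying Mixup inside the Attention mechanism rather than on raw inputs. As a rough, hedged sketch of what attention-level Mixup can look like (the function names, the exact mixing location, and the use of NumPy here are illustrative assumptions, not the authors' implementation), one might linearly interpolate the attention outputs of each sample with those of a permuted sample in the batch:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_mixup(q, k, v, lam, perm):
    """Illustrative sketch (not the authors' code): scaled dot-product
    attention followed by Mixup of the attention outputs.

    q, k, v: arrays of shape (batch, seq_len, d)
    lam:     mixing coefficient in [0, 1]
    perm:    permutation of batch indices pairing each sample with another
    """
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)   # (batch, seq, seq)
    out = softmax(scores) @ v                        # (batch, seq, d)
    # Mix each sample's attention output with its permuted partner's;
    # no new trainable parameters are introduced by this step.
    return lam * out + (1.0 - lam) * out[perm]
```

Because the interpolation reuses the existing attention outputs, this kind of mixing adds no parameters and only a small elementwise cost, consistent with the efficiency claim above; the labels would be mixed with the same coefficient during training.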


Publication date: 25 Sep 2023
Project Page: https://github.com/kiwi-lilo/AMPLIFY
Paper: https://arxiv.org/pdf/2309.12689