The paper presents a two-stage framework for long-tail class incremental learning, the setting in which a model must learn new classes progressively from class-imbalanced data without catastrophically forgetting earlier ones. The framework addresses the under-representation of tail classes by aligning the classifier using the global variance of the learned features together with per-class prototypes, which captures class properties without requiring data balancing or tuning of additional layers. It can be integrated with any class incremental learning method to handle long-tail scenarios, and its effectiveness is demonstrated through experiments on the CIFAR-100 and ImageNet-Subset datasets.
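The sketch below illustrates the classifier-alignment idea in a hedged, simplified form and is not the authors' implementation: each class keeps its own prototype (mean feature) as the center of a Gaussian, but all classes share a single global variance estimated from the full feature pool, and the classifier alone is fine-tuned on pseudo-features sampled from these Gaussians. Names such as `features_per_class`, `compute_stats`, and `align_classifier` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def compute_stats(features_per_class):
    """features_per_class: dict {class_id: Tensor [n_i, d]} of backbone features."""
    # Per-class prototype = mean feature vector of that class.
    prototypes = {c: f.mean(dim=0) for c, f in features_per_class.items()}
    # Global variance shared by all classes, estimated from the pooled features.
    all_feats = torch.cat(list(features_per_class.values()), dim=0)
    global_var = all_feats.var(dim=0, unbiased=False)
    return prototypes, global_var


def align_classifier(classifier, prototypes, global_var,
                     samples_per_class=256, epochs=5, lr=0.01):
    """Fine-tune only the classifier on pseudo-features drawn from
    N(prototype_c, global_var) for every seen class c (illustrative routine)."""
    std = global_var.sqrt()
    optim = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    classes = sorted(prototypes.keys())
    for _ in range(epochs):
        feats, labels = [], []
        for c in classes:
            # Sample the same number of pseudo-features for every class,
            # so tail classes are no longer under-represented.
            noise = torch.randn(samples_per_class, std.numel()) * std
            feats.append(prototypes[c].unsqueeze(0) + noise)
            labels.append(torch.full((samples_per_class,), c, dtype=torch.long))
        feats, labels = torch.cat(feats), torch.cat(labels)
        optim.zero_grad()
        loss = F.cross_entropy(classifier(feats), labels)
        loss.backward()
        optim.step()
    return classifier
```

Under this reading, sharing one variance across classes is what lets tail classes, whose own sample statistics are too noisy to estimate reliably, borrow spread information from the head classes when the classifier is re-aligned.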

Publication date: 2 Nov 2023
Project Page: https://github.com/JAYATEJAK/GVAlign
Paper: https://arxiv.org/pdf/2311.01227