The study proposes Imprecise Bayesian Continual Learning (IBCL), a new method for addressing task trade-offs in continual learning. Upon each new task, IBCL updates a knowledge base of model parameter distributions; given a user preference over tasks, it then obtains a preference-addressing model zero-shot, i.e., without the additional training overhead that existing algorithms require to generate such models. Experimental results show that IBCL improves task accuracy while significantly reducing training overhead.
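To make the zero-shot step concrete, below is a minimal, hypothetical sketch of the mechanism the summary describes: a knowledge base caches one parameter distribution per learned task (assumed here to be diagonal Gaussians), and a user preference vector selects a model as a convex combination of the cached distributions, with no further training. The `KnowledgeBase` class, its methods, and the Gaussian assumption are illustrative and not the paper's actual API.

```python
import numpy as np

class KnowledgeBase:
    """Hypothetical sketch: per-task parameter posteriors, mixed on demand."""

    def __init__(self):
        self.means = []  # per-task posterior means over model parameters
        self.stds = []   # per-task posterior standard deviations

    def update(self, mean, std):
        """Continual-learning step: append the new task's posterior."""
        self.means.append(np.asarray(mean, dtype=float))
        self.stds.append(np.asarray(std, dtype=float))

    def preference_model(self, preference):
        """Zero-shot step: mix cached posteriors by a task-preference vector,
        then sample concrete parameters; no gradient updates are performed."""
        w = np.asarray(preference, dtype=float)
        w = w / w.sum()  # normalize onto the probability simplex
        mean = sum(wi * m for wi, m in zip(w, self.means))
        std = sum(wi * s for wi, s in zip(w, self.stds))
        return np.random.normal(mean, std)

# Usage: two tasks already learned; the user weights task 1 over task 2 (3:1).
kb = KnowledgeBase()
kb.update(mean=[0.2, -0.5], std=[0.1, 0.1])  # posterior from task 1
kb.update(mean=[0.8, 0.3], std=[0.2, 0.1])   # posterior from task 2
params = kb.preference_model([3, 1])          # no retraining required
print(params)
```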

Publication date: 4 Oct 2023
arXiv abstract: https://arxiv.org/abs/2310.02995v1
Paper: https://arxiv.org/pdf/2310.02995