This paper presents a new approach to multi-label text classification (MLTC) that does not require extensive data annotation. A pre-trained language model first maps the input text to a set of preliminary label likelihoods. A label dependency graph is then computed and used to refine those preliminary likelihoods. The approach performs well in low-supervision settings, improving on the pre-trained language model's initial performance by 70%.
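The two-stage idea, preliminary per-label likelihoods refined through a label dependency graph, can be sketched as follows. This is a minimal illustration only: the `build_label_graph` and `refine` functions, the co-occurrence-based graph, and the blending rule are assumptions for the sketch, not the paper's actual BNCL method.

```python
import numpy as np

def build_label_graph(Y):
    """Row-normalized label co-occurrence matrix from a 0/1 label matrix
    Y of shape (n_docs, n_labels). A stand-in for a learned dependency graph."""
    co = Y.T @ Y                       # label co-occurrence counts
    np.fill_diagonal(co, 0)            # drop self-co-occurrence
    row_sums = co.sum(axis=1, keepdims=True)
    return np.divide(co, row_sums,
                     out=np.zeros_like(co, dtype=float),
                     where=row_sums > 0)

def refine(prelim, G, alpha=0.7):
    """Blend each label's preliminary likelihood with those of its
    graph neighbours (hypothetical update rule, not the paper's)."""
    return alpha * prelim + (1 - alpha) * prelim @ G.T

# Toy example: 3 labels, where labels 0 and 1 tend to co-occur.
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
G = build_label_graph(Y)

prelim = np.array([0.9, 0.2, 0.1])   # preliminary likelihoods from a PLM
print(refine(prelim, G))
```

In this toy run, label 1's score is pulled up because it strongly co-occurs with the confidently predicted label 0, which is the intuition behind refining likelihoods via label dependencies.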
Publication date: 26 Sep 2023
Project Page: https://github.com/muberraozmen/BNCL
Paper: https://arxiv.org/pdf/2309.13543