The article discusses Prompt Tuning, a scalable and cost-effective alternative to full fine-tuning of Pretrained Language Models (PLMs), applied to multi-label text classification: classifying companies into an investment firm’s proprietary industry taxonomy. The study addresses the limitations of text-to-text classification with PLMs in the multi-label setting, where each label can consist of multiple tokens and the model must generate every applicable label name. The results show that replacing the PLM’s language head with a classification head significantly improves performance and reduces computational cost at inference time.
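To make the head-replacement idea concrete, below is a minimal sketch of prompt tuning with a classification head for multi-label classification, assuming a PyTorch / Hugging Face setup. The class name `SoftPromptClassifier`, the choice of `bert-base-uncased` as the backbone, and the mean-pooling step are illustrative assumptions, not details taken from the paper.

```python
# A hedged sketch: soft prompt tuning with a multi-label classification head.
# Only the soft prompt and the head are trained; the PLM stays frozen.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptClassifier(nn.Module):  # illustrative name, not from the paper
    def __init__(self, model_name: str, num_labels: int, prompt_length: int = 20):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        # Freeze the PLM: this is what makes prompt tuning cheap.
        for p in self.backbone.parameters():
            p.requires_grad = False
        hidden = self.backbone.config.hidden_size
        # Learnable "soft prompt" vectors prepended to the input embeddings.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)
        # Classification head in place of the language-modeling head:
        # one logit per label, so multi-token label names are never generated.
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        tok_emb = self.backbone.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(
            batch, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.backbone(inputs_embeds=inputs_embeds, attention_mask=mask)
        # Mean-pool over tokens (an assumed pooling choice), then score
        # every label independently.
        pooled = out.last_hidden_state.mean(dim=1)
        return self.head(pooled)

# Usage: sigmoid(logits) > 0.5 yields the predicted label set.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SoftPromptClassifier("bert-base-uncased", num_labels=50)
enc = tokenizer(["A fintech startup building payment rails"], return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
probs = torch.sigmoid(logits)
```

Training such a model against multi-hot label vectors with `torch.nn.BCEWithLogitsLoss` updates only the soft prompt and the head, and inference needs a single forward pass rather than autoregressive decoding of label tokens.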
Publication date: 21 Sep 2023
Project Page: https://arxiv.org/abs/2309.12075v1
Paper: https://arxiv.org/pdf/2309.12075