The article presents BayesPrompt, a novel fine-tuning paradigm designed to improve the performance of pre-trained language models (PLMs) on few-shot downstream tasks. The problem it targets is that existing prompt-tuning methods fail to generalize to specific few-shot patterns, primarily because PLMs contain over-multitudinous conceptual knowledge while holding only incomplete knowledge of the target downstream domains. BayesPrompt addresses this by approximating the complete target domains of downstream tasks in a debiased manner and generating discriminative prompts from these approximations, thereby providing unambiguous guidance to the PLMs. The approach achieves state-of-the-art performance on the evaluated benchmarks.
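
The core idea, as summarized above, is to approximate the downstream domain with a tractable distribution and then sample discriminative prompt vectors from it. The sketch below is a minimal, hypothetical illustration of that idea only: it assumes a Gaussian mixture fitted to PLM embeddings of the few-shot support set, and the function names, the mixture-model choice, and the dimensions are assumptions for illustration, not the paper's exact procedure.

```python
import torch
from sklearn.mixture import GaussianMixture

# Hypothetical sketch: approximate the downstream-domain distribution with a
# Gaussian mixture fitted to PLM embeddings of in-domain examples, then sample
# vectors from it to serve as soft prompt tokens. The helper names below
# (fit_domain_prior, sample_prompt) are invented for this illustration.

def fit_domain_prior(domain_embeddings: torch.Tensor, n_components: int = 4) -> GaussianMixture:
    """Fit a mixture model to pooled embeddings of the few-shot support set."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
    gmm.fit(domain_embeddings.cpu().numpy())
    return gmm

def sample_prompt(gmm: GaussianMixture, n_prompt_tokens: int = 8) -> torch.Tensor:
    """Sample soft prompt vectors from the approximated domain distribution."""
    samples, _ = gmm.sample(n_prompt_tokens)
    return torch.tensor(samples, dtype=torch.float32)  # [n_prompt_tokens, hidden_dim]

# Usage: prepend the sampled prompt to the PLM's input embeddings.
few_shot_embeds = torch.randn(32, 768)      # placeholder: pooled PLM embeddings of support examples
prior = fit_domain_prior(few_shot_embeds)
soft_prompt = sample_prompt(prior)          # [8, 768]
input_embeds = torch.randn(1, 128, 768)     # placeholder: embedded input sequence
prompted = torch.cat([soft_prompt.unsqueeze(0), input_embeds], dim=1)
```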

Publication date: 25 Jan 2024
arXiv page: https://arxiv.org/abs/2401.14166v1
Paper: https://arxiv.org/pdf/2401.14166