The article introduces Q-probing, a method that adapts a pre-trained language model to maximize a task-specific reward function. The approach sits between heavier interventions such as finetuning and lighter ones such as few-shot prompting: a simple linear function (the "Q-probe") is learned on the model's embedding space and used to reweight candidate completions. At inference time, several completions are sampled from the base model and one is selected via a softmax over the probe's values. The authors show theoretically that this sampling procedure is equivalent to KL-constrained maximization of the Q-probe as the number of samples increases. With this technique, they see gains in domains with ground-truth rewards such as code generation, even outperforming finetuning in data-limited regimes.
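Below is a minimal sketch of the reweighted-sampling step described above, not the authors' implementation: it assumes you already have embeddings for k candidate completions and a learned probe weight vector `w`, and it picks one candidate via a softmax over the probe scores with temperature `beta` (the function and variable names here are illustrative).

```python
import numpy as np

def q_probe_select(candidate_embeddings, w, beta=0.1, rng=None):
    """Pick one candidate completion by softmax-reweighting linear probe scores.

    candidate_embeddings: (k, d) array, one embedding per sampled completion.
    w: (d,) weight vector of the learned linear Q-probe.
    beta: temperature; smaller beta concentrates mass on the highest-scoring candidate.
    """
    rng = rng or np.random.default_rng()
    scores = candidate_embeddings @ w          # linear probe value for each candidate
    logits = scores / beta
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)     # index of the chosen completion

# Toy usage with random embeddings and a random probe (placeholders for real model outputs).
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))   # 5 candidates, 8-dim embeddings
w = rng.normal(size=8)
print("chosen candidate index:", q_probe_select(emb, w, beta=0.1, rng=rng))
```

Lowering `beta` makes the selection closer to a hard argmax of the probe, while raising it keeps the choice closer to the base model's own sampling distribution, which is the trade-off captured by the KL-constrained view.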

Publication date: 23 Feb 2024
Project Page: https://github.com/likenneth/q_probe
Paper: https://arxiv.org/pdf/2402.14688