Prompting a Pretrained Transformer Can Be a Universal Approximator
The paper develops a theoretical account of fine-tuning methods such as prompting and prefix-tuning for pretrained transformer models. It shows that these methods can universally approximate sequence-to-sequence functions, and that…
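To make the setting concrete, below is a minimal PyTorch sketch of prefix-tuning as commonly defined: trainable key/value vectors are prepended to the attention inputs while the pretrained weights stay frozen. This illustrates the mechanism the paper analyzes, not the paper's approximation construction; all names (`PrefixAttention`, `d_model`, `n_prefix`) are illustrative assumptions.

```python
# A minimal sketch of prefix-tuning on a single attention layer (assumed
# setup, not the paper's construction): learnable prefix keys/values are
# prepended, and only they receive gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixAttention(nn.Module):
    def __init__(self, d_model: int, n_prefix: int):
        super().__init__()
        # Frozen "pretrained" projections.
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        for proj in (self.w_q, self.w_k, self.w_v):
            proj.weight.requires_grad_(False)
        # The only trainable parameters: prefix keys/values in activation space.
        self.prefix_k = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.w_q(x)
        b = x.shape[0]
        # Prepend the learned prefix to the keys and values of every example.
        k = torch.cat([self.prefix_k.expand(b, -1, -1), self.w_k(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), self.w_v(x)], dim=1)
        attn = F.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
        return attn @ v

# Usage: outputs keep the input shape; gradients flow only to the prefix.
layer = PrefixAttention(d_model=16, n_prefix=4)
out = layer(torch.randn(2, 5, 16))
print(out.shape)  # torch.Size([2, 5, 16])
```

The design choice that matters here is that the prefix lives in the attention's key/value space rather than in the token vocabulary, which is what gives prefix-tuning more expressive room than hard prompting.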