Text-to-Code Generation with Modality-relative Pre-training
The study focuses on adapting pre-trained language models for text-to-code generation. The researchers explore how the same sequence tokens can be adapted and represented differently depending on the modality in which they appear, natural language or programming language.
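For illustration, here is a minimal sketch of one way a token could receive a modality-relative representation: a shared surface token is offset into a modality-specific slice of a duplicated embedding table, so the same token id gets a distinct vector as text versus as code. The class name, vocabulary size, and offset scheme below are assumptions made for this sketch, not the paper's exact construction.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000      # assumed base (text) vocabulary size
NUM_MODALITIES = 2       # 0 = natural language, 1 = code

class ModalityRelativeEmbedding(nn.Module):
    """Hypothetical: one embedding row per (token, modality) pair."""

    def __init__(self, vocab_size: int, num_modalities: int, dim: int):
        super().__init__()
        self.vocab_size = vocab_size
        self.embed = nn.Embedding(vocab_size * num_modalities, dim)

    def forward(self, token_ids: torch.Tensor, modality_ids: torch.Tensor):
        # Offset token ids into the modality-specific slice of the table,
        # so "return" as an English word and "return" as a Python keyword
        # map to different embedding rows.
        relative_ids = token_ids + modality_ids * self.vocab_size
        return self.embed(relative_ids)

# Usage: the same token id (e.g. 1234) embedded as text vs. as code.
emb = ModalityRelativeEmbedding(VOCAB_SIZE, NUM_MODALITIES, dim=768)
tokens = torch.tensor([[1234, 1234]])
modalities = torch.tensor([[0, 1]])   # first as text, second as code
vectors = emb(tokens, modalities)     # two distinct representations
print(vectors.shape)                  # torch.Size([1, 2, 768])
```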