This study investigates the robustness of large pre-trained language models (LLMs) for spoken task-oriented dialogues. While LLMs perform strongly on written dialogues, their effectiveness on spoken input is far less well understood. The researchers transcribed spoken dialogues with a state-of-the-art automatic speech recognition (ASR) engine and evaluated LLMs on the resulting transcripts. They found that LLMs are not inherently robust to ASR noise; however, when fine-tuned or trained on a suitable dataset of spoken task-oriented dialogues, the models become considerably more robust.

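The kind of evaluation described above can be illustrated with a minimal, hypothetical sketch: inject simulated ASR-style word confusions into written dialogue turns, then compare a model's exact-match accuracy on clean versus noisy input. The confusion table, substitution rate, and metric below are illustrative assumptions, not the paper's actual pipeline or data.

```python
import random

# Hypothetical ASR-style confusion table (illustrative, not real ASR output).
CONFUSIONS = {"book": "buck", "cheap": "chip", "train": "rain", "two": "too"}

def simulate_asr_noise(utterance, sub_rate=0.3, seed=0):
    """Replace known words with confusable variants at a fixed rate."""
    rng = random.Random(seed)
    words = []
    for w in utterance.split():
        if w in CONFUSIONS and rng.random() < sub_rate:
            words.append(CONFUSIONS[w])
        else:
            words.append(w)
    return " ".join(words)

def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match their references."""
    matches = sum(p == r for p, r in zip(predictions, references))
    return matches / len(references)
```

To probe robustness, one would run the same model on each utterance and on `simulate_asr_noise(utterance)`, then compare the two accuracy scores; a large gap indicates sensitivity to spoken noise.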
Publication date: 4 Jan 2024
Project Page: https://arxiv.org/abs/2401.02297
Paper: https://arxiv.org/pdf/2401.02297