The article discusses the lack of effective user participation in current language-model-driven agents, which stems from the vagueness common in user instructions. To address this, the authors introduce Intention-in-Interaction (IN3), a novel benchmark designed to probe users' implicit intentions through explicit queries. They also propose incorporating a model expert into agent designs to enhance user-agent interaction. Using IN3, they train Mistral-Interact, a model that assesses task vagueness, queries the user about their intentions, and refines them into actionable goals. The results show that their approach excels at identifying vague user tasks, recovering and summarizing missing information, setting precise agent execution goals, and improving overall agent efficiency.
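The clarify-then-execute loop described above can be sketched in a few lines. This is a toy, rule-based illustration only: the vagueness heuristic, function names, and marker words are assumptions for demonstration, not the paper's Mistral-Interact implementation, which uses a trained model for each step.

```python
# Hypothetical sketch of an intention-clarification loop, NOT the
# paper's actual method: Mistral-Interact performs these steps with
# a trained model, not word-list heuristics.

# Toy markers of underspecified instructions (illustrative only).
VAGUE_MARKERS = {"something", "some", "stuff", "things"}

def is_vague(task: str) -> bool:
    """Flag a task as vague if it contains an underspecified word."""
    return bool(set(task.lower().split()) & VAGUE_MARKERS)

def clarify(task: str, ask) -> str:
    """If the task is vague, query the user (via `ask`) and fold the
    answer into a refined, actionable goal; otherwise pass it through."""
    if not is_vague(task):
        return task
    detail = ask("Could you specify what exactly you want?")
    return f"{task} (clarified: {detail})"

if __name__ == "__main__":
    # Simulated user answer in place of an interactive prompt.
    goal = clarify("Develop something like a game",
                   ask=lambda q: "a 2D snake game in Python")
    print(goal)
```

A real agent would replace `is_vague` and the single `ask` call with model-driven judgments and a multi-turn dialogue before handing the refined goal to a downstream executor.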
Publication date: 14 Feb 2024
Project Page: https://github.com/HBX-hbx/Mistral-Interact
Paper: https://arxiv.org/pdf/2402.09205