The paper ‘Preference-Conditioned Language-Guided Abstraction’ proposes a way to improve robot learning from demonstrations by tackling the spurious feature correlations such methods often pick up. When a change in demonstrated behavior is noticed, a language model is queried for the user preferences that could explain the change, and the language model is then used to construct a state abstraction conditioned on the most likely preference. When the model is uncertain about its own estimate, it can ask the human directly. The framework is evaluated in simulated experiments, a user study, and on a real Spot robot performing tasks.
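To make the described loop concrete, here is a minimal sketch of how the preference-inference and abstraction steps could fit together. All names (`query_language_model`, `ask_human`, `build_state_abstraction`, the confidence threshold) are illustrative assumptions for this summary, not the authors' actual implementation or API.

```python
# Hypothetical sketch of the preference-conditioned abstraction loop described above.
# The language-model and human-query functions are placeholders supplied by the caller.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Demonstration:
    task: str              # natural-language task description, e.g. "bring me the cup"
    behavior_summary: str  # textual summary of the demonstrated behavior


def infer_preference(
    old_demo: Demonstration,
    new_demo: Demonstration,
    query_language_model: Callable[[str], Dict[str, float]],
    ask_human: Callable[[List[str]], str],
    confidence_threshold: float = 0.6,
) -> str:
    """Return the preference most likely to explain a change in behavior.

    The language model scores candidate preferences; if the best guess is not
    confident enough, the human is asked directly.
    """
    prompt = (
        f"Task: {old_demo.task}\n"
        f"Previous behavior: {old_demo.behavior_summary}\n"
        f"New behavior: {new_demo.behavior_summary}\n"
        "List candidate user preferences that could explain the change, "
        "with a probability for each."
    )
    # query_language_model is assumed to return {preference: probability}.
    candidates = query_language_model(prompt)
    best_pref, best_prob = max(candidates.items(), key=lambda kv: kv[1])

    if best_prob < confidence_threshold:
        # Fall back to asking the human when the estimate is uncertain.
        best_pref = ask_human(sorted(candidates))
    return best_pref


def build_state_abstraction(
    task: str,
    preference: str,
    query_language_model: Callable[[str], Dict[str, float]],
) -> List[str]:
    """Keep only the state features the language model deems relevant,
    conditioned on the task and the inferred preference."""
    prompt = (
        f"Task: {task}\nPreference: {preference}\n"
        "Score how relevant each state feature is to completing the task."
    )
    scores = query_language_model(prompt)
    return [feature for feature, score in scores.items() if score > 0.5]
```

The key design point reflected here is that the human is only queried when the model's own preference estimate falls below a confidence threshold, keeping the interaction burden low.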


Publication date: 6 Feb 2024
DOI: https://doi.org/10.1145/3610977.3634930
Paper: https://arxiv.org/pdf/2402.03081