This article presents Distillation and Retrieval of Online Corrections (DROC), a system that uses human language corrections to improve robot performance in novel environments. Powered by a large language model (LLM), the system can respond to arbitrary forms of language feedback, distill generalizable knowledge from those corrections, and retrieve relevant past experiences based on textual and visual similarity. The authors show that DROC outperforms alternative techniques, requiring fewer corrections and iterations to complete novel tasks.
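
The retrieval step can be pictured with a minimal sketch (not the authors' implementation): past corrections are stored alongside text and image embeddings, and the most relevant ones are retrieved by a weighted mix of textual and visual cosine similarity. The `Experience` class, `retrieve` function, and the `alpha` weight below are hypothetical names introduced for illustration, and the embeddings are assumed to come from external language/vision encoders.

```python
# Illustrative sketch of similarity-based retrieval of past corrections.
# Embeddings are assumed to be produced elsewhere (e.g., by a language-model
# or vision encoder); here they are plain numpy vectors.

from dataclasses import dataclass
import numpy as np


@dataclass
class Experience:
    correction: str          # distilled knowledge from a past human correction
    text_embed: np.ndarray   # embedding of the task/correction description
    image_embed: np.ndarray  # embedding of the scene image


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def retrieve(query_text: np.ndarray,
             query_image: np.ndarray,
             memory: list[Experience],
             alpha: float = 0.5,
             k: int = 3) -> list[Experience]:
    """Return the k most relevant past corrections, scored by a weighted
    combination of textual and visual similarity (alpha is a hypothetical
    mixing weight, not a value from the paper)."""
    scored = sorted(
        memory,
        key=lambda e: alpha * cosine(query_text, e.text_embed)
                      + (1 - alpha) * cosine(query_image, e.image_embed),
        reverse=True,
    )
    return scored[:k]
```

The retrieved corrections would then be placed in the LLM's context when planning a new task, so that previously distilled knowledge can guide behavior in similar situations.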

Publication date: 20 Nov 2023
Project Page: https://sites.google.com/stanford.edu/droc
Paper: https://arxiv.org/pdf/2311.10678