The study addresses the complexity of long-horizon robot planning by autonomously learning logic-based relational representations from raw, high-dimensional robot trajectories. The researchers developed an approach that learns symbolic predicates and action models directly from continuous demonstrations, without a priori labeling. The learned abstractions let planning algorithms scale to tasks that were previously out of reach without hand-crafted abstractions, pointing toward more generalizable and scalable robot planning.
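
To make the high-level recipe concrete, below is a minimal, self-contained Python sketch of the general pattern, not the paper's actual algorithm: boolean predicates are invented by thresholding continuous state dimensions, and STRIPS-style operators are derived by intersecting the symbolic states observed before and across each action. All function names, the thresholding scheme, and the toy data are hypothetical simplifications.

```python
import numpy as np

def invent_predicates(states, n_thresholds=2):
    """Invent boolean predicates by thresholding each state dimension at
    quantiles of its observed values (a stand-in for learned classifiers)."""
    thresholds = {
        d: np.quantile(states[:, d], [(i + 1) / (n_thresholds + 1)
                                      for i in range(n_thresholds)])
        for d in range(states.shape[1])
    }
    def evaluate(state):
        # Symbolic state: the set of (dimension, threshold) atoms that hold.
        return frozenset((d, k)
                         for d, ts in thresholds.items()
                         for k, t in enumerate(ts) if state[d] > t)
    return evaluate

def learn_operators(demos, evaluate):
    """Derive STRIPS-style operators per action label: intersect the symbolic
    states seen before execution (preconditions) and the atoms consistently
    gained/lost across execution (add/delete effects)."""
    ops = {}
    for states, actions in demos:
        sym = [evaluate(s) for s in states]
        for t, a in enumerate(actions):
            pre, post = sym[t], sym[t + 1]
            add, dele = post - pre, pre - post
            if a not in ops:
                ops[a] = {"pre": set(pre), "add": set(add), "del": set(dele)}
            else:
                # Keep only what held/changed in every observed transition.
                ops[a]["pre"] &= pre
                ops[a]["add"] &= add
                ops[a]["del"] &= dele
    return ops

# Toy usage: two 1-D "lift" demos where dimension 0 (gripper height) rises.
rng = np.random.default_rng(0)
demos = []
for _ in range(2):
    states = np.cumsum(rng.uniform(0.1, 0.3, size=(5, 1)), axis=0)
    demos.append((states, ["lift"] * (len(states) - 1)))

all_states = np.vstack([s for s, _ in demos])
evaluate = invent_predicates(all_states)
print(learn_operators(demos, evaluate))
```

The paper's method learns far richer relational predicates and lifted action models, but intersecting preconditions and effects across observed transitions is a common starting point in action-model learning.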

Publication date: 19 Feb 2024
Abstract: https://arxiv.org/abs/2402.11871v1
Paper: https://arxiv.org/pdf/2402.11871