The article presents a framework for learning contact-rich manipulation skills, which are essential for many robotic tasks. Learning such skills is difficult: reinforcement learning in the real world is data-inefficient, while policies trained purely in simulation suffer from the sim-to-real gap. To address these challenges, the authors propose a hybrid offline-online framework. In the offline phase, model-free reinforcement learning in simulation learns the robot motion together with the compliance control parameters. In the online phase, a residual over the compliance control parameters is learned to maximize the robot's performance. The framework's effectiveness is demonstrated on tasks such as assembly, pivoting, and screwing.
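
As a rough sketch of how the two phases might fit together at execution time, the snippet below combines an offline-learned policy (motion command plus nominal compliance parameters) with an online-learned residual over those parameters. All names (`act`, `offline_policy`, `residual_policy`) and the parameter shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def act(obs, offline_policy, residual_policy):
    """Combine the offline-learned nominal behavior with an
    online-learned residual over the compliance parameters."""
    # Offline phase (trained in simulation): the policy outputs a motion
    # command and nominal compliance control parameters.
    motion_cmd, compliance_params = offline_policy(obs)

    # Online phase: a residual over the compliance parameters adapts the
    # behavior to the real robot; the motion command is left unchanged.
    adapted_params = compliance_params + residual_policy(obs)
    return motion_cmd, adapted_params

# Toy usage with placeholder policies standing in for the learned networks.
offline_policy = lambda obs: (np.zeros(6), np.array([500.0, 500.0, 200.0]))
residual_policy = lambda obs: np.array([-50.0, 0.0, 25.0])
motion, params = act(np.zeros(12), offline_policy, residual_policy)
```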

Publication date: 16 Oct 2023
Project Page: https://sites.google.com/view/admitlearn
Paper: https://arxiv.org/pdf/2310.10509