The paper presents a combined task-level reinforcement learning and motion planning framework for a multi-class in-rack test tube rearrangement problem. At the task level, reinforcement learning infers a sequence of tube swap actions while ignoring robot motion details; at the motion level, the framework plans the detailed robotic pick-and-place motions that realize each swap. Training uses a Dueling Double Deep Q-Network (D3QN) for efficiency, together with an A*-based post-processing technique that amplifies the collected training data. The framework is verified through simulations and real-world studies.
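The two ideas behind D3QN can be illustrated compactly: the dueling head decomposes Q-values into a state value and mean-centered action advantages, and the double-DQN target uses the online network to select the next action but the target network to evaluate it. The sketch below shows only these generic computations with NumPy; the function names and array shapes are illustrative and not taken from the paper.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a').

    value:      shape (batch, 1), state-value stream output
    advantages: shape (batch, n_actions), advantage stream output
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN bootstrap target.

    The online network picks argmax actions for the next states,
    while the target network supplies their Q-values, which reduces
    the overestimation bias of vanilla Q-learning.
    """
    best_action = np.argmax(q_online_next, axis=-1)
    eval_q = q_target_next[np.arange(len(best_action)), best_action]
    return reward + gamma * eval_q * (1.0 - done)

# Tiny worked example (one state, two actions):
q = dueling_q(np.array([[1.0]]), np.array([[1.0, 3.0]]))
# Advantage mean is 2.0, so Q = [1+1-2, 1+3-2] = [0.0, 2.0]
target = double_dqn_target(
    reward=np.array([1.0]), gamma=0.9,
    q_online_next=np.array([[0.2, 0.8]]),   # online net picks action 1
    q_target_next=np.array([[0.5, 0.6]]),   # target net evaluates it: 0.6
    done=np.array([0.0]),
)
# target = 1.0 + 0.9 * 0.6 = 1.54
```

Subtracting the advantage mean keeps the value/advantage split identifiable, which is the standard dueling-network formulation these two helpers assume.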
Publication date: 19 Jan 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.09772