This academic paper examines the challenge of choosing the right state representation for reinforcement learning (RL) in robot control, focusing on a specific task: antipodal, planar object grasping. The authors explore a spectrum of state representation abstractions, from complete system knowledge down to image-based representations with decreasing amounts of task-specific knowledge. The results indicate that RL agents using numerical states can perform comparably to non-learning baselines. Among image-based agents, those observing embedding vectors from a pre-trained environment encoder outperform agents trained end-to-end. The authors conclude that task-specific knowledge is necessary for achieving high success rates in robot control.
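The contrast between end-to-end image agents and agents fed pre-trained embedding vectors can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoder here is a stand-in (a fixed random linear projection), and all class names, shapes, and dimensions are assumptions chosen for the example.

```python
import numpy as np

class PretrainedEncoder:
    """Stand-in for a frozen, pre-trained image encoder.

    A fixed random linear projection maps a flattened camera image
    to a compact embedding vector. In the embedding-based setup, the
    RL policy consumes this low-dimensional vector instead of raw
    pixels, and the encoder's weights stay frozen during RL training.
    """

    def __init__(self, image_shape=(64, 64, 3), embed_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = int(np.prod(image_shape))
        # Frozen projection weights (illustrative; a real encoder
        # would be a pre-trained network).
        self.weights = rng.standard_normal((in_dim, embed_dim)) / np.sqrt(in_dim)

    def encode(self, image):
        # Flatten and project to the embedding space.
        return image.reshape(-1) @ self.weights

# Usage: replace a high-dimensional image observation with its embedding.
encoder = PretrainedEncoder()
image = np.zeros((64, 64, 3))  # placeholder camera observation
state = encoder.encode(image)
print(state.shape)  # (32,)
```

The design point is the interface: the policy only ever sees the compact `state` vector, so the representation learning (encoder) and the control learning (RL agent) are decoupled, unlike end-to-end training where both must be learned jointly from reward signal alone.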


Publication date: 22 Sep 2023
Project Page: https://github.com/PetropoulakisPanagiotis/igae
Paper: https://arxiv.org/pdf/2309.11984