This study introduces ICGNet, an architecture for object-centric grasping in robotics. The method takes as input a point cloud captured from a single, arbitrary viewing direction and generates an instance-centric representation for each partially observed object in the scene. This representation is then used for object reconstruction and grasp detection in cluttered tabletop scenes. Evaluations against state-of-the-art methods on synthetic datasets show superior grasping and reconstruction performance, and real-world experiments demonstrate applicability by decluttering scenes with varying numbers of objects.
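
To make the described pipeline concrete, the sketch below shows one plausible interface for such a system: a single-view point cloud is encoded into per-object latents, and each latent is decoded both into an occupancy-style reconstruction and into grasp proposals. All names (`encode_scene`, `reconstruct`, `predict_grasps`, `InstanceRepresentation`, `Grasp`) and the placeholder logic are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of an instance-centric grasping pipeline in the spirit
# of ICGNet (arXiv:2401.09939). Names and dummy logic are illustrative only.
from dataclasses import dataclass
import numpy as np


@dataclass
class InstanceRepresentation:
    """Latent code for one partially observed object instance."""
    latent: np.ndarray   # (D,) instance-centric embedding
    center: np.ndarray   # (3,) estimated object center


@dataclass
class Grasp:
    position: np.ndarray  # (3,) gripper position
    rotation: np.ndarray  # (3, 3) gripper orientation
    width: float          # gripper opening width
    score: float          # predicted grasp quality


def encode_scene(points: np.ndarray, latent_dim: int = 64) -> list[InstanceRepresentation]:
    """Map a single-view point cloud (N, 3) to per-instance latents.
    Stand-in: treats the whole cloud as one instance; a trained encoder
    would segment the scene and embed each object separately."""
    center = points.mean(axis=0)
    latent = np.zeros(latent_dim)
    return [InstanceRepresentation(latent=latent, center=center)]


def reconstruct(instance: InstanceRepresentation, queries: np.ndarray) -> np.ndarray:
    """Occupancy-style decoder: score query points (M, 3) against one
    instance latent. Stand-in: a fixed-radius ball around the center."""
    dist = np.linalg.norm(queries - instance.center, axis=1)
    return (dist < 0.05).astype(np.float32)  # 1.0 = predicted inside the object


def predict_grasps(instance: InstanceRepresentation, top_k: int = 3) -> list[Grasp]:
    """Grasp decoder: propose gripper poses from one instance latent.
    Stand-in: top-down grasps centered on the estimated object center."""
    return [
        Grasp(position=instance.center, rotation=np.eye(3), width=0.04, score=1.0 - 0.1 * i)
        for i in range(top_k)
    ]


if __name__ == "__main__":
    cloud = np.random.rand(2048, 3) * 0.3  # fake single-view point cloud
    for inst in encode_scene(cloud):
        occupancy = reconstruct(inst, np.random.rand(128, 3) * 0.3)
        grasps = predict_grasps(inst)
        print(f"instance at {inst.center.round(3)}: "
              f"{int(occupancy.sum())} occupied queries, "
              f"best grasp score {grasps[0].score:.2f}")
```

In this reading, the key design choice is that reconstruction and grasp detection are two decoders over the same per-object latent, which is what makes the representation object-centric: each object can be completed and grasped independently of the surrounding clutter.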


Publication date: 19 Jan 2024
Project Page: not provided
Paper: https://arxiv.org/pdf/2401.09939