The article introduces RGBGrasp, a new approach to robotic object grasping. Unlike previous methods that rely on specialized point-cloud cameras or abundant RGB visual data, RGBGrasp perceives the 3D surroundings, including transparent and specular objects, from only a limited set of RGB views, and still achieves accurate grasping. The method uses pre-trained depth prediction models to impose geometry constraints, enabling precise 3D structure estimation even under sparse-view conditions. It also integrates hash encoding and a proposal sampler strategy to accelerate the 3D reconstruction process. Experiments validate the method across a wide range of object-grasping scenarios.
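The summary above does not include code, so the following is only a rough illustration. The sketch (PyTorch; all function and parameter names are hypothetical) shows one common way a frozen monocular depth network's output can serve as a geometry constraint on a sparse-view neural reconstruction, which is roughly the role the pre-trained depth models play in RGBGrasp. Since monocular depth predictions are typically only defined up to an unknown scale and shift, the sketch aligns the prior to the rendered depth before penalizing the difference; whether RGBGrasp uses exactly this loss form is an assumption, not a claim about the paper.

```python
# Minimal sketch (hypothetical names): photometric loss plus a
# scale/shift-aligned monocular depth prior, a common way to
# regularize reconstruction when only a few RGB views are available.
import torch


def align_scale_shift(src, dst):
    """Closed-form least squares for scale s and shift t
    minimizing ||s * src + t - dst||^2."""
    src, dst = src.flatten(), dst.flatten()
    A = torch.stack([src, torch.ones_like(src)], dim=1)  # (N, 2)
    sol = torch.linalg.lstsq(A, dst.unsqueeze(1)).solution
    return sol[0, 0], sol[1, 0]


def reconstruction_loss(pred_rgb, gt_rgb, pred_depth, prior_depth,
                        depth_weight=0.1):
    """pred_rgb, pred_depth: colors/depths rendered by the radiance field.
    gt_rgb: observed pixel colors from the sparse RGB views.
    prior_depth: prediction from a frozen monocular depth network."""
    rgb_loss = torch.mean((pred_rgb - gt_rgb) ** 2)
    # Align the (scale/shift-ambiguous) depth prior to the rendered
    # depth, then penalize the remaining disagreement.
    s, t = align_scale_shift(prior_depth, pred_depth)
    depth_loss = torch.mean(torch.abs(pred_depth - (s * prior_depth + t)))
    return rgb_loss + depth_weight * depth_loss
```

In this kind of setup the depth term supplies the geometric supervision that the missing views would otherwise provide, while the photometric term keeps the reconstruction consistent with the observed images; the hash encoding and proposal sampler mentioned above would then speed up how the rendered quantities are computed, not change this loss.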

Publication date: 29 Nov 2023
Project Page: https://sites.google.com/view/rgbgrasp
Paper: https://arxiv.org/pdf/2311.16592