The research paper presents Tactile Adaptation from Visual Incentives (TAVI), a new framework that improves tactile dexterity in robots by using vision-based rewards. TAVI proceeds in three steps: first, visual representations are learned with a contrastive objective; second, a reward function is built from these representations via optimal-transport based matching against a single human demonstration; third, the robot's tactile-based policy is optimized with online reinforcement learning to maximize this visual reward. Implemented on a four-fingered Allegro robot hand, the framework substantially improves performance on tasks such as picking and placing pegs, unstacking bowls, and flipping slender objects.
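The reward construction is the most concrete algorithmic step, so below is a minimal sketch of an optimal-transport based visual reward in the spirit of TAVI. Everything here is an illustrative assumption rather than the paper's exact implementation: the embeddings are presumed to come from the learned contrastive encoder, the cosine cost, uniform marginals, and entropic Sinkhorn solver (via the POT library) are common choices for this kind of trajectory matching, and `ot_visual_reward` is a hypothetical helper name.

```python
# Sketch of an optimal-transport visual reward against one demonstration.
# Assumptions: frame embeddings are precomputed by a (learned) visual encoder;
# hyperparameters and function names are illustrative, not from the paper.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def ot_visual_reward(demo_feats: np.ndarray, rollout_feats: np.ndarray,
                     reg: float = 0.05) -> np.ndarray:
    """Per-timestep rewards from matching rollout frames to a demonstration.

    demo_feats:    (T_demo, D) visual embeddings of the human demonstration.
    rollout_feats: (T_robot, D) visual embeddings of the robot rollout.
    Returns a (T_robot,) array of rewards (higher = closer to the demo).
    """
    # Cosine cost between every (rollout frame, demo frame) pair.
    a = rollout_feats / np.linalg.norm(rollout_feats, axis=1, keepdims=True)
    b = demo_feats / np.linalg.norm(demo_feats, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                        # shape (T_robot, T_demo)

    # Uniform marginals: every frame in each trajectory carries equal mass.
    mu = np.full(len(rollout_feats), 1.0 / len(rollout_feats))
    nu = np.full(len(demo_feats), 1.0 / len(demo_feats))

    # Entropy-regularized OT (Sinkhorn) yields a soft frame-to-frame matching.
    plan = ot.sinkhorn(mu, nu, cost, reg)       # shape (T_robot, T_demo)

    # Reward per robot frame: negative cost transported to the demonstration.
    return -(plan * cost).sum(axis=1)
```

In the full pipeline, per-timestep rewards of this kind would then drive the online reinforcement learning update of the robot's tactile policy.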

Publication date: 22 Sep 2023
Project Page: https://see-to-touch.github.io/
Paper: https://arxiv.org/pdf/2309.12300