This paper introduces Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT), a method for addressing the emerging challenge of scaling up Robotics Transformers (RT) for on-robot deployment. The authors propose a new fine-tuning procedure called up-training, which converts pre-trained or already fine-tuned Transformer-based robotic policies into efficient linear-attention counterparts while maintaining high quality. The effectiveness of SARA-RT is demonstrated by speeding up both the recently introduced class of RT-2 models and Point Cloud Transformer (PCT) robotic policies. The authors complement these results with a rigorous mathematical analysis that provides deeper insight into SARA.
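For intuition about why the linear-attention conversion matters, the sketch below contrasts standard softmax attention (quadratic in sequence length) with kernelized linear attention (linear in sequence length). This is a minimal illustration in plain Python/NumPy, assuming a simple ReLU feature map `phi` as a placeholder; it is not the paper's actual SARA mapping or up-training procedure.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: cost scales as O(N^2) in sequence length N.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized (linear) attention: cost scales as O(N).
    # `phi` is a hypothetical placeholder feature map, not SARA's mechanism.
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                      # (d, d_v) summary, independent of N
    normalizer = Qp @ Kp.sum(axis=0)   # per-query normalization, shape (N,)
    return (Qp @ kv) / normalizer[:, None]

# Toy comparison on random data
N, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, N, d))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

Because the `(d, d_v)` key-value summary can be computed once and reused for every query, the quadratic attention matrix never needs to be materialized, which is what makes such policies attractive for on-robot deployment.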


Publication date: 5 Dec 2023
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2312.01990