The paper introduces Self-guided Masked Autoencoders (SMA), a self-supervised learning method that requires no domain-specific assumptions about the input. SMA trains an attention-based model with a masked modeling objective and learns the masks it samples from the model itself, rather than relying on hand-crafted, domain-specific masking strategies. The authors evaluate SMA on three self-supervised learning benchmarks spanning protein biology, chemical property prediction, and particle physics, and find that it learns useful representations without any domain-specific knowledge while achieving state-of-the-art performance on these benchmarks.
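
To make the idea of attention-guided mask sampling concrete, here is a minimal, illustrative sketch in PyTorch. It is not the paper's exact procedure: the function name, the way attention is aggregated into per-token scores, and the masking ratio are all assumptions chosen for clarity.

```python
import torch


def sample_attention_guided_mask(attn: torch.Tensor, mask_ratio: float = 0.15) -> torch.Tensor:
    """Sample a token mask whose probabilities are derived from attention maps.

    attn: attention weights of shape (batch, heads, seq, seq) from the encoder.
    Returns a boolean mask of shape (batch, seq) with roughly mask_ratio tokens masked.
    """
    # Aggregate the attention each token receives, averaged over heads and summed
    # over queries, as a rough per-token importance score (an illustrative choice,
    # not the rule used in the paper).
    token_scores = attn.mean(dim=1).sum(dim=1)        # (batch, seq)
    probs = torch.softmax(token_scores, dim=-1)       # masking distribution per sequence

    num_mask = max(1, int(mask_ratio * attn.size(-1)))
    idx = torch.multinomial(probs, num_mask)          # sample positions to mask
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask


if __name__ == "__main__":
    # Fake attention maps: batch of 2, 4 heads, sequence length 10.
    attn = torch.softmax(torch.randn(2, 4, 10, 10), dim=-1)
    print(sample_attention_guided_mask(attn, mask_ratio=0.2))
```

The point of the sketch is the overall shape of the approach: the masking distribution is computed from the model's own attention rather than from domain-specific heuristics, so the same mechanism applies to proteins, molecules, or particle-physics data without modification.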

 

Publication date: 23 Feb 2024
Project Page: https://arxiv.org/abs/2402.14789v1
Paper: https://arxiv.org/pdf/2402.14789