The paper discusses the limitations of current active learning approaches and proposes a new method that is model-agnostic and does not require an iterative labeling process. The authors leverage self-supervised learned features for active learning, obtaining useful feature representations of the input data without any annotation. They also describe Momentum Contrast (MoCo), which trains a visual representation encoder by matching an encoded query to a dictionary of encoded keys using a contrastive loss. Experiments are performed on the CIFAR-10 dataset.
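
As described above, MoCo's contrastive objective treats each query's matching key as the positive and the keys held in a dictionary (queue) as negatives. The sketch below is a minimal, illustrative PyTorch implementation of that InfoNCE-style loss; the function name, tensor shapes, and temperature value are assumptions for illustration and are not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def moco_contrastive_loss(q, k_pos, queue, temperature=0.07):
    """Contrastive (InfoNCE-style) loss as used in MoCo.

    q:      (N, D) encoded queries
    k_pos:  (N, D) encoded positive keys (one per query)
    queue:  (K, D) dictionary of encoded negative keys
    """
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    queue = F.normalize(queue, dim=1)

    # Positive logits: similarity between each query and its matching key -> (N, 1)
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(-1)
    # Negative logits: similarity between each query and every key in the queue -> (N, K)
    l_neg = torch.einsum("nd,kd->nk", q, queue)

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive key sits at index 0, so the target "class" is 0 for every query.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

# Example usage with random features (illustrative sizes)
q = torch.randn(32, 128)        # batch of 32 queries, 128-dim features
k = torch.randn(32, 128)        # matching positive keys
queue = torch.randn(4096, 128)  # dictionary of negative keys
loss = moco_contrastive_loss(q, k, queue)
```

Once the encoder is trained this way, its frozen features can be used to represent the unlabeled pool without any annotation, which is the property the paper builds on.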

Publication date: 3 Jan 2024
arXiv abstract: https://arxiv.org/abs/2401.01690v1
Paper: https://arxiv.org/pdf/2401.01690