This paper studies the convergence rates of loss- and uncertainty-based active learning algorithms. The authors first give conditions under which a convergence rate guarantee holds and apply them to linear classifiers and linearly separable datasets. They then propose a framework for deriving convergence rate bounds for loss-based sampling under stochastic gradient descent (SGD). Finally, they propose an active learning algorithm that combines point sampling with the stochastic Polyak step size, and they establish a condition under which this algorithm enjoys a convergence rate guarantee for smooth convex loss functions. The results inform both model training and label acquisition strategies in machine learning.
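The summary does not spell out the algorithm itself, so the following is a minimal, hypothetical Python sketch of the general recipe described: score a candidate pool by current loss, query the highest-loss point, and update with a capped stochastic Polyak step size, eta = loss / ||grad||^2, which assumes the per-example optimal loss is roughly zero (as in interpolation regimes). The function names (`active_sgd_polyak`, `logistic_loss`), the pool size, and the step-size cap are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def logistic_loss(w, x, y):
    # Smooth convex per-example loss: log(1 + exp(-y <w, x>)), with y in {-1, +1}.
    return np.logaddexp(0.0, -y * np.dot(w, x))

def logistic_grad(w, x, y):
    # Gradient of the logistic loss; sigmoid(-m) is written via tanh for stability.
    m = y * np.dot(w, x)
    return -y * 0.5 * (1.0 - np.tanh(0.5 * m)) * x

def active_sgd_polyak(X, y, n_steps=1000, pool_size=32, eta_max=10.0, seed=0):
    """Illustrative loss-based active learning loop: at each step, query the
    pooled point with the largest current loss, then take an SGD step with a
    capped stochastic Polyak step size eta = loss / ||grad||^2 (assumes the
    per-example minimum loss is ~0)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        # Loss-based sampling: score a small random candidate pool by current loss.
        pool = rng.choice(n, size=min(pool_size, n), replace=False)
        losses = np.array([logistic_loss(w, X[i], y[i]) for i in pool])
        i = pool[np.argmax(losses)]  # query the highest-loss ("most informative") point
        g = logistic_grad(w, X[i], y[i])
        g_norm2 = np.dot(g, g)
        if g_norm2 < 1e-12:
            continue  # point is already well fit; skip the update
        eta = min(float(losses.max()) / g_norm2, eta_max)  # capped Polyak step
        w = w - eta * g
    return w

# Toy usage on a linearly separable dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)
w_hat = active_sgd_polyak(X, y)
print("train accuracy:", np.mean(np.sign(X @ w_hat) == y))
```

Capping the step size guards against blow-ups when the queried point's gradient is nearly zero, a standard safeguard when Polyak-type steps are used stochastically.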

Publication date: 22 Dec 2023
Project Page: https://arxiv.org/abs/2312.13927
Paper: https://arxiv.org/pdf/2312.13927