HAAQI-Net is a deep learning model for assessing music quality for hearing aid users. It takes a music sample and a hearing loss pattern as input and predicts a HAAQI score using a Bidirectional Long Short-Term Memory (BLSTM) network with attention, with acoustic features extracted by a pre-trained Bidirectional Encoder representation from Audio Transformers (BEATs) model. HAAQI-Net achieves high prediction accuracy while cutting inference time from 62.52 seconds (for the original HAAQI computation) to 2.71 seconds, making it an efficient music quality assessment model for hearing aid users.
Publication date: 4 Jan 2024
Project Page: Not provided
Paper: https://arxiv.org/pdf/2401.01145
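The architecture described above (BEATs features plus a hearing-loss vector fed into an attention-pooled BLSTM regressor) can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the feature dimension (768, typical for BEATs), the hearing-loss vector size, hidden sizes, and the sigmoid output head are all assumptions.

```python
import torch
import torch.nn as nn

class HAAQINetSketch(nn.Module):
    """Illustrative sketch of a HAAQI-Net-style regressor (dimensions assumed)."""

    def __init__(self, feat_dim=768, hl_dim=8, hidden=128):
        super().__init__()
        # BLSTM over the concatenation of acoustic features and hearing-loss info
        self.blstm = nn.LSTM(feat_dim + hl_dim, hidden,
                             batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # per-frame attention scores
        self.head = nn.Sequential(                      # map pooled state to [0, 1]
            nn.Linear(2 * hidden, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, feats, hl):
        # feats: (batch, time, feat_dim) acoustic features (e.g., from BEATs)
        # hl:    (batch, hl_dim) hearing-loss pattern, broadcast across time
        hl_rep = hl.unsqueeze(1).expand(-1, feats.size(1), -1)
        h, _ = self.blstm(torch.cat([feats, hl_rep], dim=-1))
        w = torch.softmax(self.attn(h), dim=1)          # attention weights over time
        pooled = (w * h).sum(dim=1)                     # weighted temporal pooling
        return self.head(pooled).squeeze(-1)            # predicted score in [0, 1]

# Example with random stand-in features (a real pipeline would use BEATs output)
model = HAAQINetSketch()
with torch.no_grad():
    scores = model(torch.randn(2, 50, 768), torch.randn(2, 8))
print(scores.shape)  # one score per input clip
```

HAAQI scores lie in [0, 1], which motivates the sigmoid head here; the actual model's output layer and loss are described in the paper.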