This academic article examines neural finite-state transducers (NFSTs), an expressive family of neurosymbolic sequence transduction models. The authors focus on the challenge of imputing the latent alignment path that explains a given pair of input and output strings (e.g., during training). They train three autoregressive approximate models for amortized inference of the path, which can then be used as proposal distributions for importance sampling. The paper also examines the expressiveness of NFSTs and its impact on training efficiency.
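To make the importance-sampling idea concrete, here is a minimal Python sketch of estimating the marginal likelihood of a string pair by sampling latent paths from an amortized proposal and reweighting them under the model. The `model` and `proposal` interfaces (`proposal.sample`, `model.log_joint`) are hypothetical stand-ins, not the paper's actual API:

```python
import math

def estimate_log_marginal(model, proposal, x, y, num_samples=64):
    """Estimate log p(y | x) = log E_q[ p(path, y | x) / q(path | x, y) ]
    via importance sampling, with q an amortized proposal over latent paths.

    Assumed (hypothetical) interfaces:
      proposal.sample(x, y)        -> (path, log q(path | x, y))
      model.log_joint(path, x, y)  -> log p(path, y | x) under the NFST
    """
    log_weights = []
    for _ in range(num_samples):
        path, log_q = proposal.sample(x, y)   # draw a candidate alignment path
        log_p = model.log_joint(path, x, y)   # score it under the NFST
        log_weights.append(log_p - log_q)     # importance weight, in log space
    # Stable log-mean-exp of the importance weights.
    m = max(log_weights)
    return m + math.log(sum(math.exp(w - m) for w in log_weights) / num_samples)
```

A better proposal (one closer to the true posterior over paths) yields lower-variance weights, which is why the quality of the amortized inference model matters for training.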
Publication date: 21 Dec 2023
Project Page: https://arxiv.org/abs/2312.13614
Paper: https://arxiv.org/pdf/2312.13614