Improving Semantic Control in Discrete Latent Spaces with Transformer Quantized Variational Autoencoders
This article discusses T5VQVAE, a novel model that improves semantic control and generation in Transformer-based Variational AutoEncoders (VAEs). By leveraging the controllability of VQ-VAEs, T5VQVAE guides the self-attention mechanism in…
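The excerpt is cut off here. As background for the discrete latent spaces the article refers to, the core vector-quantization step of a VQ-VAE can be sketched as follows; the function and variable names (`quantize`, `codebook`) are illustrative, not taken from the article, and a real model would also train the codebook with commitment losses and a straight-through gradient estimator.

```python
import numpy as np

def quantize(z, codebook):
    """Snap each encoder output vector to its nearest codebook entry.

    z: (n, d) continuous encoder outputs.
    codebook: (K, d) learned discrete code vectors.
    Returns the quantized vectors and the chosen codebook indices.
    """
    # Squared Euclidean distance from every z[i] to every codebook[k]: (n, K)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)          # nearest code index per vector
    return codebook[idx], idx        # discrete replacement for z

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codes of dimension d=4
z = rng.normal(size=(3, 4))          # 3 encoder outputs
z_q, idx = quantize(z, codebook)
```

Because every latent vector is replaced by one of a small, fixed set of codes, downstream components (such as the attention mechanism the article mentions) operate over an interpretable discrete vocabulary rather than an unconstrained continuous space.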