The article presents a model for symbolic music generation controlled by a target emotion, combining diffusion models with Generative Adversarial Networks (GANs). Diffusion models have excelled at generative tasks on continuous data but have been less effective on discrete symbolic music; this work addresses that gap while steering generation toward a desired emotion. It also mitigates the slow sampling that afflicts diffusion models applied to symbolic music. The reported results show that the model successfully conditions symbolic music generation on a target emotion while substantially reducing computational cost.
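The sampling speed-up in diffusion-GAN hybrids typically comes from letting a GAN generator learn large denoising jumps, so only a handful of sampling steps are needed instead of hundreds. The sketch below illustrates that few-step sampling loop with a placeholder generator conditioned on an emotion label; all names, the step count, and the noise schedule are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_STEPS = 4      # few-step sampling, vs. hundreds for a vanilla diffusion model
SEQ_LEN = 32       # length of a continuous-relaxed symbolic music sequence
NUM_EMOTIONS = 4   # e.g. four quadrants of a valence/arousal plane

def generator(x_t, t, emotion_id):
    """Placeholder for the trained GAN generator: given a noisy sequence
    x_t at step t and an emotion label, predict the clean sequence x_0.
    Here it simply pulls x_t toward an emotion-dependent mean."""
    target = np.full(SEQ_LEN, emotion_id / (NUM_EMOTIONS - 1))
    return 0.5 * x_t + 0.5 * target

def sample(emotion_id):
    x = rng.standard_normal(SEQ_LEN)              # start from pure noise
    for t in reversed(range(NUM_STEPS)):
        x0_pred = generator(x, t, emotion_id)     # GAN predicts clean data
        if t > 0:
            # re-noise the prediction to the next (lower) noise level
            alpha = t / NUM_STEPS                 # crude linear schedule
            noise = rng.standard_normal(SEQ_LEN)
            x = (1 - alpha) * x0_pred + np.sqrt(alpha) * noise
        else:
            x = x0_pred
    return x

melody = sample(emotion_id=3)   # condition on one emotion class
print(melody.shape)             # (32,)
```

The key point the sketch conveys is that the generator predicts the clean sample directly at each step, so the loop terminates after very few iterations; the real model would replace the placeholder with a trained network over symbolic music tokens.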

Publication date: 25 Oct 2023
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2310.14040