The system created by the researchers works by converting raw audio files into spectrograms and then using deep learning models to generate, in real time, lyrics that match the music being processed.
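The spectrogram step can be illustrated with a short-time Fourier transform. This is a minimal NumPy sketch of the general technique, not the researchers' own preprocessing code; the frame size and hop length are arbitrary choices for illustration.

```python
# Minimal spectrogram sketch: slide a window over the signal and take the
# magnitude of the FFT of each frame (an illustration, not the authors' code).
import numpy as np

def spectrogram(audio, frame_size=256, hop=128):
    """Return a (frames x frequency-bins) matrix of magnitude spectra."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(audio) - frame_size) // hop
    frames = np.stack([audio[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# Toy example: one second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (n_frames, frame_size // 2 + 1)
```

A deep learning model then consumes this time-by-frequency matrix the way an image model consumes pixels.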
This could help artists compose new lyrics that go well with the music they create.
Researchers from the University of Waterloo have published the results of their study of this computational system in a preprint. According to Olga Vechtomova, one of the researchers who carried out the study:
The objective of this research was to design a system that can generate lyrics that reflect the mood and emotions expressed through various aspects of music, such as chords, instruments, tempo, etc. We set out to create a tool that musicians could use to get inspired to write their own songs.
The architecture of the model is made up of two variational autoencoders, one designed to learn representations of the musical audio and the other to learn representations of the lyrics.
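A variational autoencoder compresses its input to a small latent vector (sampled via the reparameterization trick) and decodes it back. The following is a deliberately simplified sketch of how two such models, one over spectrogram frames and one over lyric vectors, could share a latent space; the linear layers, dimensions, and the coupling of audio latent to lyric decoder are all assumptions for illustration, not the published model.

```python
# Hedged sketch of a two-VAE setup (assumed structure, not the authors' model):
# each VAE maps its own modality into a latent space of the same size.
import numpy as np

rng = np.random.default_rng(0)

class LinearVAE:
    """Toy VAE: linear encoder -> (mu, log_var) -> sampled z -> linear decoder."""
    def __init__(self, in_dim, latent_dim):
        self.enc_mu = rng.normal(size=(in_dim, latent_dim)) * 0.1
        self.enc_lv = rng.normal(size=(in_dim, latent_dim)) * 0.1
        self.dec = rng.normal(size=(latent_dim, in_dim)) * 0.1

    def encode(self, x):
        mu, log_var = x @ self.enc_mu, x @ self.enc_lv
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

    def decode(self, z):
        return z @ self.dec

LATENT = 16
audio_vae = LinearVAE(in_dim=129, latent_dim=LATENT)  # spectrogram frames
lyric_vae = LinearVAE(in_dim=300, latent_dim=LATENT)  # lyric embeddings

# One plausible coupling (assumed here): encode a music frame, then decode
# that latent with the lyric-side decoder to get a music-conditioned lyric vector.
frame = rng.normal(size=(1, 129))
z = audio_vae.encode(frame)
lyric_vec = lyric_vae.decode(z)
print(z.shape, lyric_vec.shape)
```

The shared latent size is what makes it possible to pass a representation learned from audio into the lyric-generating half of the system.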
To evaluate the system, Vechtomova and her colleagues conducted a user study in which they asked musicians to play live music and share their feedback on the lyrics the system generated.
Vechtomova and her colleagues are currently working on a final version of the system that artists around the world could easily access, while also designing other tools to support the lyric-writing process.