Video Samples for "D2MNet for Music Generation Joint Driven by Facial Expressions and Dance Movements"

Published: 12 April 2024 | Version 1 | DOI: 10.17632/cdjfthjmm6.1
Contributors:
Jiang Huang

Description

It is worth noting that synthesizing high-quality audio remains a challenging and computationally demanding research problem. At present, shorter music samples are used for training and for the standard tests in the main experiments. As the length of the generated sample increases, model performance degrades: the beat correspondence and style-consistency metrics achieve lower scores. However, our model can also be effectively trained and tested on longer music sequences by using a relatively larger network with more parameters, and this is a direction of our future work.
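For reference, beat correspondence between driving dance movements and generated music is commonly measured with a beat alignment score: each kinematic beat is matched to its nearest musical beat, and the distances are pooled through a Gaussian kernel. The sketch below is a minimal illustration of that idea under assumptions not stated on this page (the exact metric definition, the sigma value, the use of librosa for beat tracking, and the file name "generated_sample.wav" are all hypothetical).

import numpy as np
import librosa


def beat_alignment_score(motion_beats, music_beats, sigma=0.1):
    """Mean Gaussian-kernel distance (in seconds) from each motion beat
    to its nearest music beat. Higher values indicate better alignment."""
    motion_beats = np.asarray(motion_beats, dtype=float)
    music_beats = np.asarray(music_beats, dtype=float)
    if motion_beats.size == 0 or music_beats.size == 0:
        return 0.0
    # Distance from every motion beat to the closest music beat.
    dists = np.min(np.abs(motion_beats[:, None] - music_beats[None, :]), axis=1)
    return float(np.mean(np.exp(-(dists ** 2) / (2 * sigma ** 2))))


if __name__ == "__main__":
    # Extract musical beats from a generated sample (path is hypothetical).
    audio, sr = librosa.load("generated_sample.wav", sr=None)
    _, beat_frames = librosa.beat.beat_track(y=audio, sr=sr)
    music_beats = librosa.frames_to_time(beat_frames, sr=sr)

    # Kinematic beats would normally be detected from local minima of joint
    # velocity in the dance sequence; placeholder values are used here.
    motion_beats = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

    print("beat alignment score:", beat_alignment_score(motion_beats, music_beats))

One practical consequence of this formulation is that longer generated samples accumulate more opportunities for beat drift, which is consistent with the lower scores reported above for longer sequences.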

Files

Institutions

Communication University of China

Categories

Deep Learning

Licence