Dance Teaching Video Generation Assisted by Deep Learning with Long Short-Term Memory Networks

Published: 27 August 2024 | Version 1 | DOI: 10.17632/gxg39sgyh2.1
Contributor:
Siwen Bi

Description

This study introduces a deep learning model for generating high-quality instructional dance videos. Long Short-Term Memory (LSTM) networks are employed for time-series modeling, and conditional generation techniques are incorporated so that dance movements can be generated from music segments. Bidirectional Long Short-Term Memory (Bi-LSTM) and Mixture Density Network (MDN) techniques are further integrated to strengthen the model’s capacity to capture temporal relationships in dance movements and to diversify the generated outcomes. In empirical validation, the proposed model shows significant performance advantages in dance video generation tasks, excelling in accuracy, subjective user ratings, movement smoothness, and emotional expressiveness. Notably, it achieves over 90% accuracy on dance genres such as ballet and street dance, surpassing traditional Convolutional Neural Network and Global Average Convolutional Neural Network baselines. Ablation experiments and an analysis of the model’s computational complexity provide additional confirmation of its effectiveness and performance advantages. In summary, this study proposes a deep learning model that integrates LSTM networks, conditional generation techniques, Bi-LSTM, and MDN, demonstrating its effectiveness in generating high-quality instructional dance videos and bringing innovation to dance teaching and artistic performance.
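To make the described architecture concrete, the following is a minimal PyTorch sketch of a music-conditioned Bi-LSTM with an MDN output head of the kind the description outlines: a bidirectional LSTM encodes a music-feature sequence, and the MDN head predicts a Gaussian mixture over the pose vector at each frame, which supports diverse sampled motions. All names, dimensions, and the diagonal-Gaussian simplification are illustrative assumptions, not details taken from the dataset or paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MusicToDanceMDN(nn.Module):
    # Hypothetical dimensions: 64-d music features, 51-d pose vectors (e.g. 17 joints x 3 coords).
    def __init__(self, music_dim=64, pose_dim=51, hidden=256, n_mixtures=5):
        super().__init__()
        self.n_mixtures = n_mixtures
        self.pose_dim = pose_dim
        # Bidirectional LSTM over the conditioning music features (the "Bi-LSTM" component).
        self.encoder = nn.LSTM(music_dim, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        # MDN head: mixture weights, means, and diagonal std devs per frame.
        self.pi = nn.Linear(2 * hidden, n_mixtures)
        self.mu = nn.Linear(2 * hidden, n_mixtures * pose_dim)
        self.log_sigma = nn.Linear(2 * hidden, n_mixtures * pose_dim)

    def forward(self, music):              # music: (B, T, music_dim)
        h, _ = self.encoder(music)         # (B, T, 2*hidden)
        B, T, _ = h.shape
        log_pi = F.log_softmax(self.pi(h), dim=-1)                # (B, T, K)
        mu = self.mu(h).view(B, T, self.n_mixtures, self.pose_dim)
        sigma = torch.exp(self.log_sigma(h)).view(B, T, self.n_mixtures, self.pose_dim)
        return log_pi, mu, sigma

def mdn_loss(log_pi, mu, sigma, pose):
    # Negative log-likelihood of target poses under the predicted Gaussian mixture.
    target = pose.unsqueeze(2)                                    # (B, T, 1, D)
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(target).sum(dim=-1)  # (B, T, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Usage sketch: predict per-frame mixtures from music features and train on paired pose data.
model = MusicToDanceMDN()
music = torch.randn(8, 120, 64)       # batch of 8 clips, 120 frames, 64-d audio features
pose = torch.randn(8, 120, 51)        # matching pose sequences
loss = mdn_loss(*model(music), pose)
loss.backward()

At generation time, one would sample a mixture component per frame (or take the most likely one) to obtain smooth yet varied dance movements conditioned on the input music.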

Categories

Deep Learning

Licence