A Multimodal Dataset for Music Information Retrieval of Under Resourced Languages

Published: 13 May 2024| Version 5 | DOI: 10.17632/4dnv9kvrz3.5
Contributors:
Osondu Oguike

Description

The dataset is a multimodal dataset for music information retrieval (MIR) of the under-resourced Sotho-Tswana language. The MIR tasks for which the dataset can be used include sentiment analysis, genre classification, emotion/mood prediction, and recommender systems. Although the primary source of the musical videos is YouTube, several processing steps have been applied to the dataset, such as segmentation, separation of the audio modality from the visual modality, and extraction of spectral-based acoustic features. The dataset is well suited to training deep learning models, and the recommended method for combining the features of the audio and visual modalities is decision-level (late) fusion. In addition to the musical video clips, the dataset includes various CSV files and Jupyter notebooks.
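As a rough sketch of what decision-level (late) fusion means in practice, the snippet below averages the class-probability outputs of separately trained audio and visual models and picks the winning class. The function name, weights, and example probabilities are invented for illustration; they are not taken from the dataset's notebooks.

```python
def late_fusion(prob_audio, prob_visual, w_audio=0.5, w_visual=0.5):
    """Decision-level (late) fusion: each modality's model produces its
    own class probabilities; the decisions are combined afterwards by
    weighted averaging, and the argmax class is returned."""
    fused = [w_audio * a + w_visual * v for a, v in zip(prob_audio, prob_visual)]
    return fused, fused.index(max(fused))

# Hypothetical softmax outputs for a 3-class mood task.
p_audio = [0.2, 0.5, 0.3]
p_visual = [0.1, 0.2, 0.7]
fused, label = late_fusion(p_audio, p_visual)  # label is the fused class index
```

Late fusion is convenient here because the audio and visual branches can use entirely different architectures (e.g. a CNN on spectrograms and a CNN on frames) and only their output probabilities need to be aligned.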

Files

Steps to reproduce

The raw musical videos were downloaded from YouTube, the audio modality was separated from the visual modality, and spectral-based acoustic features were extracted from the audio modality. The segments of the video clips were split into individual images/frames, which are used for training.

Repository name: Mendeley Data
Data identification number: 10.17632/4dnv9kvrz3.1
Direct URL to data: https://data.mendeley.com/datasets/4dnv9kvrz3/2

To use the dataset, follow these steps:

1. Download the raw musical videos using the video downloader site savefrom.net.
2. Because the downloaded video clips have different durations, use the Jupyter notebook SplitVideo.ipynb, together with the text file split1.txt, to split each downloaded video clip into equal fifteen-second segments. This puts all the video clips on a level playing field.
3. Rename each video segment so that it matches the corresponding name in the VideoSegment.csv file.
4. Store all the segmented video clips in a new folder called Video_Clips.
5. Use the Jupyter notebook separate.ipynb to separate the audio modality of each segmented video clip from the video.
6. Rename each audio segment so that it matches the corresponding name in the AudioSegments.csv file.
7. Store all the audio segments in a new folder called Audio_Clips.
8. Use the Jupyter notebook Video_Images.ipynb to generate the frames/images from the segmented video clips; this generates the CSV file Video_Images.csv. If limited computing resources prevent you from training on all the segmented video and audio clips, delete the entries you are not using from VideoSegment.csv and AudioSegments.csv; on a sufficiently powerful computer system you can use them all.
9. Store all the generated frames/images in a new folder called Video_Frames.
10. Based on the MIR task, use appropriate deep learning models to train on the audio, textual, and visual modalities of the dataset, with the late fusion method.
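The actual fifteen-second splitting is done by the dataset's SplitVideo.ipynb notebook. As a sketch of the arithmetic involved (the function name and the choice to drop a trailing remainder shorter than fifteen seconds are assumptions, not taken from the notebook):

```python
SEGMENT_LEN = 15  # seconds

def segment_bounds(duration, seg_len=SEGMENT_LEN):
    """Return (start, end) times, in seconds, of equal seg_len-second
    segments. A trailing remainder shorter than seg_len is dropped so
    that every segment has exactly the same duration."""
    n_full = int(duration // seg_len)
    return [(i * seg_len, (i + 1) * seg_len) for i in range(n_full)]

# A hypothetical 52-second clip yields three full 15-second segments.
print(segment_bounds(52))  # [(0, 15), (15, 30), (30, 45)]
```

Equal-length segments matter because most deep learning pipelines expect fixed-size inputs per training example.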
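The audio/visual separation itself is performed by separate.ipynb. A common way to do this kind of separation is with ffmpeg; the sketch below only builds such a command (it does not run it), and the file paths are hypothetical examples:

```python
def ffmpeg_extract_audio_cmd(video_path, audio_path):
    """Build an ffmpeg command that drops the video stream (-vn) and
    writes the audio track as 16-bit PCM WAV at 22.05 kHz. The list is
    suitable for subprocess.run; it is not executed here."""
    return ["ffmpeg", "-y", "-i", video_path, "-vn",
            "-acodec", "pcm_s16le", "-ar", "22050", audio_path]

# Hypothetical segment names following the folder layout above.
cmd = ffmpeg_extract_audio_cmd("Video_Clips/clip_001.mp4",
                               "Audio_Clips/clip_001.wav")
```

WAV output is a reasonable default for feature extraction, since lossy re-encoding of the audio would distort the spectral features computed later.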
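Frame generation is handled by Video_Images.ipynb. One design question it implies is how many frames to keep per segment; the sketch below computes which frame indices to sample at a target rate (the function and the 1 fps default are assumptions for illustration):

```python
def frame_indices(n_frames, fps, target_fps=1):
    """Indices of the frames to keep when subsampling a clip at
    target_fps. E.g. a 15-second clip at 30 fps sampled at 1 fps
    keeps one frame per second."""
    step = max(1, round(fps / target_fps))
    return list(range(0, n_frames, step))

# A 15-second clip at 30 fps has 450 frames; sampling at 1 fps keeps 15.
print(frame_indices(450, 30))  # [0, 30, 60, ..., 420]
```

Subsampling keeps the visual modality manageable, since consecutive frames are highly redundant.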
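The dataset's spectral-based acoustic features are precomputed, but for readers who want to recompute or extend them, the spectral centroid is a representative example. The sketch below implements it directly with NumPy on a synthetic tone; libraries such as librosa offer equivalent (frame-wise) versions of this and other spectral features:

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency of the signal's spectrum,
    one of the spectral features commonly used in MIR."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

# A pure 440 Hz tone has its spectral centroid at (approximately) 440 Hz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, sr))  # ≈ 440
```

In practice such features are computed per short frame (e.g. ~23 ms windows) rather than over a whole fifteen-second segment, yielding a feature time series per audio clip.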
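Steps 3 and 6 above require renaming segments to match the names listed in VideoSegment.csv and AudioSegments.csv. A sketch of building an old-name-to-new-name mapping from such a CSV follows; the column names are assumptions, so check the actual header row of the CSV files before using it:

```python
import csv
import io

def build_rename_map(csv_text, old_col="current_name", new_col="segment_name"):
    """Map each file's current name to the name listed in the CSV.
    NOTE: old_col/new_col are assumed column names, not confirmed
    against VideoSegment.csv / AudioSegments.csv."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row[old_col]: row[new_col] for row in reader}

# Applying the mapping would then be, for example:
#   for old, new in mapping.items():
#       os.rename(os.path.join("Video_Clips", old),
#                 os.path.join("Video_Clips", new))
demo = "current_name,segment_name\nclip1_part0.mp4,VS_001.mp4\n"
mapping = build_rename_map(demo)
```

Scripting the rename avoids the transcription errors that tend to creep in when hundreds of segments are renamed by hand.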

Institutions

University of Johannesburg; University of Nigeria, Faculty of Engineering

Categories

Computer Science, Artificial Intelligence, Deep Learning

Licence