Data for: MultiMICS
Initially, we collected speech data from students and staff members of our organisation for this experiment. The recording was performed with a self-made Python program built on the SoundDevice module, which captures live audio from each speaker and converts it into numerical data. The resulting audio clips were accumulated in individual user_name folders and saved in CSV format for further processing. The dataset analysed here contains 108 audio clips. Wave views of some speech samples extracted from the dataset are provided, and we also demonstrate the spectral features (MFCCs) extracted from the speech samples. Approximately 84 data samples were used for training purposes, and the remaining ~33 samples were utilised for testing. ESN-driven Music Processing data and several outcomes have additionally been uploaded with this content.
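The capture-and-storage step described above can be sketched as follows. This is a minimal illustration, not the authors' actual program: the folder name "alice", the file name "clip_001.csv", and the 16 kHz sampling rate are assumptions, and a synthetic sine tone stands in for a live recording so the sketch runs without audio hardware (the real capture call via the SoundDevice module is shown as a comment).

```python
import os
import tempfile
import numpy as np

# In the actual pipeline a clip would be captured live, e.g. with the
# SoundDevice module (illustrative, not executed here):
#   import sounddevice as sd
#   clip = sd.rec(int(duration * fs), samplerate=fs, channels=1)
#   sd.wait()
fs = 16000                                   # sampling rate in Hz (assumed)
t = np.arange(fs) / fs
clip = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # synthetic stand-in for a recorded clip

# Store the numerical samples as CSV inside a per-user folder,
# mirroring the user_name folder layout described in the text.
root = tempfile.mkdtemp()
user_dir = os.path.join(root, "alice")       # hypothetical user_name folder
os.makedirs(user_dir, exist_ok=True)
csv_path = os.path.join(user_dir, "clip_001.csv")
np.savetxt(csv_path, clip, delimiter=",")

# Reload the clip for later processing (e.g. feature extraction).
loaded = np.loadtxt(csv_path, delimiter=",")
print(loaded.shape)  # one second of mono audio at 16 kHz
```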
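The MFCC extraction mentioned above can be sketched from first principles with NumPy and SciPy; this is a generic textbook-style implementation under assumed parameters (13 coefficients, 25 ms frames with 10 ms hop at 16 kHz, 26 mel filters), not the authors' exact feature pipeline, which may instead use a library such as librosa or python_speech_features.

```python
import numpy as np
from scipy.fft import dct

def mfcc(signal, fs=16000, n_mfcc=13, frame_len=400, hop=160,
         n_fft=512, n_mels=26):
    """Compute MFCCs: frame -> window -> power spectrum -> mel filterbank
    -> log -> DCT. Parameter defaults are assumptions, not the paper's."""
    # Slice the signal into overlapping Hann-windowed frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(frame_len)
    # Power spectrum of each frame.
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank between 0 Hz and fs/2.
    def hz2mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz2mel(0), hz2mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mels) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then a DCT to decorrelate -> cepstral coefficients.
    log_energies = np.log(spec @ fbank.T + 1e-10)
    return dct(log_energies, type=2, axis=1, norm="ortho")[:, :n_mfcc]

# One second of a 440 Hz tone as a stand-in for a speech clip.
sig = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (frames, coefficients) -> (98, 13)
```

Each row of the returned matrix is one frame's 13-coefficient feature vector, which is the usual input shape for downstream speaker-recognition training.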