Classification data for whisking sounds experiment
Description
This dataset contains the results and the data used to train a random forest classification model that identifies experimental conditions from neuronal activity and the recorded sound. The work was done as part of the Ph.D. thesis of B. Efron. We examined whether auditory cortex (AC) neurons encode the sound generated when mice whisk against different objects. To this end, we recorded neuronal activity (from well-isolated units as well as multi-unit activity, without discriminating based on spike waveform) from the AC of head-fixed mice (n = 5) running on a rotating treadmill while their whiskers made contact with aluminum foil, attenuated foil, or no object. To diminish the potential confounding contribution of the somatosensory system to the activation of AC neurons during whisking against objects, we severed the mouse's infraorbital nerve (ION), which conveys tactile sensation from the whisker pad and cheek. To discern whether changes in firing rate were due to motor commands, we compared neuronal activity elicited by aluminum foil with that elicited by attenuated foil; the major difference between these two objects is the sound produced upon whisking. A motor switched between the objects (aluminum foil, attenuated foil, and no object) at fixed intervals, moving them toward the whiskers, and whisking behavior was monitored with a high-speed camera.

To explore the possibility that AC neurons collectively encode whisking state and object identity, we applied a non-linear classification model. More specifically, we sought to determine whether a model could be trained to decode these conditions from the population activity of AC neurons. We employed random forest classifiers based on neuronal activity and analyzed the interplay between object type, whisking state, and neuronal activity. The classifier was trained on 80% of the data, while the remaining 20% were used for testing. Data were binned using an 800 ms sliding window with 400 ms steps.
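The windowing and train/test scheme described above can be sketched as follows. This is a minimal illustration, not the actual analysis code: the data are synthetic stand-ins, the function name and parameters are hypothetical, and scikit-learn is assumed as the random forest implementation (the source does not name a library).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def bin_firing_rates(spike_counts, fs, win_s=0.8, step_s=0.4):
    """Convert per-sample spike counts (n_units x n_samples) into firing
    rates over an 800 ms window sliding in 400 ms steps (windows x units)."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = np.arange(0, spike_counts.shape[1] - win + 1, step)
    rates = [spike_counts[:, s:s + win].sum(axis=1) / win_s for s in starts]
    return np.array(rates), starts

# Synthetic stand-in data (hypothetical; not the actual recordings):
rng = np.random.default_rng(0)
fs, n_units, n_samples = 1000, 30, 60_000            # 1 kHz samples, 60 s
labels = np.repeat([0, 1, 2], n_samples // 3)        # three conditions
mean_rate = 0.005 * (1 + labels)                     # separable rate levels
spike_counts = rng.poisson(np.tile(mean_rate, (n_units, 1)))

X, starts = bin_firing_rates(spike_counts, fs)
y = labels[starts + int(0.8 * fs) // 2]              # label at window center

# 80% train / 20% test split, as in the description above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

With overlapping windows, a random split like this lets neighboring (correlated) windows fall into both sets; the actual analysis may well split differently, so this is only meant to show the binning geometry and the 80/20 proportion.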
The files contain the results of the classification models and the data used to train and test them. There are two types of models: one based on the neuronal data (firing rates, FR) and one based on features extracted from ultrasonic sound recordings. Each .mat file contains a main-model structure and a sub-model structure. The main model was trained on data from whisking epochs with the three objects together with data from non-whisking times (across all objects). The sub-model was trained only on data from the non-whisking times and classified the three objects. There is one file per model type (FR and sound features) for each mouse and, for the sound-based classifier only, an additional file with data pooled across all mice. The folder 'Data for model' contains the data files and the code to run the model; minor adjustments to the code are needed, specifically the path for saving the results. The model was run on an HPC system.
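In Python, the .mat result files can be read with SciPy, which exposes MATLAB structs as objects with attribute access. The snippet below is a sketch of that pattern: it first writes a small stand-in file so it is runnable here, and the field names ("mainModel", "subModel", "accuracy", "labels") are hypothetical placeholders, not the actual structure names in the dataset.

```python
import numpy as np
from scipy.io import loadmat, savemat

# Write a small stand-in .mat file so the loading pattern can be run here.
# The real files instead hold the main-model and sub-model result structures.
savemat("example_results.mat",
        {"mainModel": {"accuracy": 0.9, "labels": np.array([0, 1, 2])},
         "subModel": {"accuracy": 0.8}})

# struct_as_record=False turns MATLAB structs into objects whose fields
# become Python attributes; squeeze_me drops singleton dimensions.
res = loadmat("example_results.mat", squeeze_me=True, struct_as_record=False)
main = res["mainModel"]
print(main.accuracy, main.labels)
print(res["subModel"].accuracy)
```

Inspecting an unfamiliar file with `loadmat(path).keys()` first is a practical way to discover the actual structure names before accessing fields.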