Distinguishing fine structure and summary representation of sound textures from neural activity

Published: 25 August 2023| Version 1 | DOI: 10.17632/gx7cb7fnv4.1
Martina Berto


Raw electroencephalogram (EEG) data and analysis code for Berto et al., 2023 ("Distinguishing fine structure and summary representation of sound textures from neural activity").

This dataset contains EEG data recorded from 24 healthy participants with normal hearing. Participants listened to continuous sound streams while performing an orthogonal task. The streams consisted of triplets of sounds in which two were identical (repeated) and one was different (novel). The novel sound could differ in its acoustic details (Local Features) or its global structure (Summary Statistics). To disentangle local and summary properties, we employed the computational approach of McDermott and Simoncelli (2011) to create synthetic sounds with similar auditory statistics but different local features, and vice versa. Since summary statistics are computed over time, we manipulated the amount of available information by employing sounds of three durations: short (40 ms), medium (209 ms), and long (478 ms). We expected changes in local features to elicit larger evoked responses at short durations, and changes in summary statistics at long ones. We also tested whether the temporal resolution at which the change occurred (local vs. summary) was encoded by different temporal scales of neural oscillations.
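The triplet design described above can be sketched as follows. This is a minimal illustration, not the authors' actual stimulus code: the sampling rate, the position of the novel excerpt, and the noise stand-ins for the synthetic sounds are all assumptions.

```python
import numpy as np

FS = 44100  # assumed sampling rate (Hz); not specified in the dataset description

# Excerpt durations used in the experiment (seconds)
DURATIONS = {"short": 0.040, "medium": 0.209, "long": 0.478}

def make_triplet(repeated, novel, novel_position=2):
    """Arrange one triplet: two identical (repeated) excerpts and one novel one.

    `repeated` and `novel` are 1-D arrays of equal length; `novel_position`
    (0, 1, or 2) is where the novel excerpt falls in the triplet. The position
    used in the actual experiment is an assumption here.
    """
    triplet = [repeated, repeated]
    triplet.insert(novel_position, novel)
    return np.concatenate(triplet)

# Illustrative use with white noise standing in for the synthetic excerpts
n = int(FS * DURATIONS["short"])
rng = np.random.default_rng(0)
repeated = rng.standard_normal(n)
novel = rng.standard_normal(n)  # would differ in local features or summary statistics
stream = make_triplet(repeated, novel)
```

In the experiment the novel excerpt differs from the repeated ones either in local features or in summary statistics, while the stream itself plays continuously.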


Steps to reproduce

Stimuli: Synthetic sounds were produced with the synthesis algorithm by McDermott and Simoncelli (2011), available here: http://mcdermottlab.mit.edu/downloads.html. The pool of available sound excerpts (n = 47,953) for each sound duration employed in our EEG experiment is provided in the folder Synthetic excerpts.zip.

EEG: Data were collected with an EGI HydroCel Geodesic Sensor Net with 65 EEG channels and a Net Amps 400 amplifier (Electrical Geodesics, Inc., EGI, USA). Acquisition was performed with EGI's Net Station 5 software (Electrical Geodesics, Inc., EGI, USA). EEG data were analyzed in MATLAB using EEGLAB and FieldTrip (2020 versions). Raw data and processing scripts are available in the corresponding folders for reproducibility; each script contains information on its usage and dependencies.
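The key idea behind the stimuli is that summary statistics are time averages of sub-band envelope measurements, so longer excerpts yield more reliable estimates. The sketch below illustrates this idea only; the actual McDermott and Simoncelli (2011) model uses a cochlear (gammatone) filterbank, envelope compression, modulation statistics, and cross-band correlations, whereas here a simple Butterworth bank, three assumed frequency bands, and three envelope moments stand in for the full statistics set.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.stats import skew

def band_envelope_stats(x, fs, bands=((100, 400), (400, 1600), (1600, 6400))):
    """Crude stand-in for texture summary statistics: per-band envelope moments.

    For each (assumed) frequency band, bandpass-filter the signal, extract its
    Hilbert envelope, and average over time to get mean, std, and skewness.
    Returns an array of shape (n_bands, 3).
    """
    stats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))
        stats.append((env.mean(), env.std(), skew(env)))
    return np.array(stats)

# Illustrative use on noise at the "long" excerpt duration (478 ms)
fs = 20000
rng = np.random.default_rng(1)
x = rng.standard_normal(int(fs * 0.478))
s = band_envelope_stats(x, fs)
```

Because these quantities are averages over time, a 40 ms excerpt constrains them only weakly, while a 478 ms excerpt pins them down much more tightly, which is the manipulation the three durations exploit.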


IMT Institute for Advanced Studies


Computational Modeling, Auditory System, Electroencephalography, Computational Acoustics, Auditory Evoked Response