Auditory Discrimination Based on the P300 Component
This study analyzes the electrophysiological and perceptual attention response to six auditory variants derived from three acoustic parameters (frequency, intensity, and spatiality), with two levels per parameter: frequencies of 500 Hz and 1000 Hz, intensities of 60 dB SPL and 67 dB SPL, and spatial positions of 0° and 90°. The aim is to measure the Event-Related Potential (ERP) and the attention elicited by changes between acoustic parameter levels, and to compare responses across the three stimulus types. The justification rests on previous research and standards such as ISO 15006, which highlight the importance of understanding cognitive responses to variations in auditory alerts, especially in vehicular contexts; this knowledge is crucial, as evidenced in studies by the Office of Research, Development, and Technology and the U.S. Department of Transportation. The importance of this study lies in its potential to inform the design of more effective auditory alerts in safety-critical contexts such as driving.

The experimental procedure begins with an audiometry to determine auditory thresholds and verify the inclusion criterion (thresholds under 20 dB, ruling out hearing loss), followed by a 1-minute recording of the participant's basal state and then exposure to a series of sounds under controlled conditions (3 minutes per condition). In three distinct stages, participants listen to 180 sounds per stage (144 frequent and 36 infrequent) varying in frequency, amplitude, or spatiality in an oddball paradigm; the positions of the infrequent sounds are randomized. Participants were instructed to count the infrequent sounds they heard and to press a key on the keyboard upon detecting each one. After each stage, participants completed a 6-question questionnaire taking at most about 1 minute, administered between experimental conditions.
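The stimulus structure of each stage can be sketched as follows. This is an illustrative Python sketch, not the original SuperCollider implementation; the label names "standard" and "deviant" are assumptions for the frequent and infrequent sounds.

```python
import random

def build_oddball_sequence(n_frequent=144, n_infrequent=36, seed=None):
    """Build one stage of the oddball paradigm described above:
    144 frequent ("standard") and 36 infrequent ("deviant") sounds,
    with the positions of the infrequent sounds randomized."""
    rng = random.Random(seed)
    sequence = ["standard"] * n_frequent + ["deviant"] * n_infrequent
    rng.shuffle(sequence)
    return sequence

# One stage: 180 trials with randomized deviant positions.
stage = build_oddball_sequence(seed=42)
```

Three such sequences, one per acoustic parameter (frequency, amplitude, spatiality), would reproduce the three experimental stages.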
This questionnaire is adapted from part of the Auditory Processing Domains Questionnaire and is used to record how many infrequent sounds participants detected and to assess their level of attention during the experiment. The main hypothesis is that an increase in an acoustic characteristic of the sound will generate a stronger attentional response, reflected in an increased P300 amplitude. This study is an effort to understand how acoustic variations in auditory alerts may affect cognitive and attentional processes, a question of great relevance to vehicular safety and user-interface design.
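The P300-amplitude comparison implied by the hypothesis can be sketched as a mean amplitude over a post-stimulus window of the trial-averaged ERP. This is a generic sketch, not the authors' analysis pipeline; the 250-500 ms window is a conventional choice for P300 and is not specified by the dataset.

```python
import numpy as np

SAMPLE_RATE = 250  # Hz, as stated in "Steps to reproduce"

def p300_amplitude(epochs, window=(0.25, 0.5)):
    """Mean amplitude in an assumed P300 window.

    `epochs`: array of shape (n_trials, n_samples), each trial
    aligned so stimulus onset is at index 0. Returns the mean of the
    trial-averaged ERP within `window` (seconds after onset).
    """
    start = int(window[0] * SAMPLE_RATE)
    stop = int(window[1] * SAMPLE_RATE)
    erp = epochs.mean(axis=0)          # trial-averaged ERP
    return erp[start:stop].mean()      # mean amplitude in the window
```

Under the hypothesis, this value computed on infrequent-sound epochs would grow as the acoustic contrast between frequent and infrequent sounds increases.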
Steps to reproduce
Electroencephalographic measurements were performed with the g.tec Unicorn Hybrid Black EEG system (24-bit amplifier), the OpenViBE interface, and the SuperCollider IDE, in which the experimental paradigm was programmed. Each subject's participation generated four ".csv" files ("Sx_amp.csv", "Sx_freq.csv", "Sx_space.csv", "Sx_basal.csv"): one per auditory variation in the experimental paradigm (amplitude, frequency, and space), each lasting at least 180 seconds (45,000 samples), plus a basal-state recording of at least 60 seconds (15,000 samples). Each file contains a Samples x Channels array with one measurement every 0.004 seconds, i.e., a sample rate (SR) of 250 Hz which, by the Nyquist theorem, captures signals in a bandwidth (BW) of up to 125 Hz. The Unicorn system has 8 electrode positions plus 2 reference electrodes, one on each mastoid bone behind the ears. The positions are Fz, C3, Cz, C4, Pz, PO7, Oz, and PO8 according to the 10/20 EEG system, corresponding to channels 1-8 respectively. The files contain raw data extracted directly from the OpenViBE interface with the Unicorn EEG equipment, with no preprocessing. Additionally, the dataset includes an Excel file with the participants' answers to the attention questionnaire, one per variant (frequency, amplitude, and space), with questions Q1-Q6 listed in the file, as well as the count of infrequent sounds reported by each participant (RD). This file also contains each participant's age, gender, and the result of the normal-hearing criterion from the audiometry assessment.
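A recording file can be read back as described above. This is a minimal sketch assuming the ".csv" files contain exactly the 8 EEG columns in the stated channel order and no header row; OpenViBE CSV exports sometimes prepend a header or a time column, so the loading call may need adjusting for the actual files.

```python
import numpy as np

# Channel order of the Unicorn Hybrid Black as stated above
# (10/20 positions, columns 1-8 of the recording).
CHANNELS = ["Fz", "C3", "Cz", "C4", "Pz", "PO7", "Oz", "PO8"]
SAMPLE_RATE = 250  # Hz -> one sample every 0.004 s

def load_recording(path):
    """Load one raw Samples x Channels '.csv' file (e.g. 'S1_freq.csv').

    Returns the 8-channel data array and the recording duration in
    seconds. Assumes comma-separated values with no header row.
    """
    data = np.loadtxt(path, delimiter=",")
    data = data[:, :len(CHANNELS)]           # keep the 8 EEG columns
    duration_s = data.shape[0] / SAMPLE_RATE
    return data, duration_s
```

For example, an experimental-condition file of at least 45,000 rows would yield a duration of at least 180 s, matching the description above.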