Behavior Research Methods - OMEXP: validation data
Description
In the article Streamlining Experiment Design in Cognitive Hearing Science using OpenSesame, published in Behavior Research Methods, we introduce a set of features built on top of the open-source OpenSesame platform to allow the rapid implementation of custom behavioural and cognitive hearing science tests. Our integration includes seven new plugins, available on GitHub (https://github.com/elus-om/BRM_OMEXP) and as a Python package (https://pypi.org/project/opensesame-plugin-omexp/):
- Audio Mixer and Calibration
- LSL Start, LSL Message and LSL Stop
- Adaptive Init and Adaptive Next

For clarity, we refer to the OpenSesame platform enhanced with these plugins as the Oticon Medical Experiment Platform (OMEXP). The Audio Mixer and Calibration plugins allow an arbitrary number of audio files (limited only by the memory of the computer running OpenSesame) to be played on any number of audio channels, each at a specific sound pressure level and with specific timing. The LSL series of plugins records synchronous input data streams from various devices, while the Adaptive Init and Adaptive Next plugins implement an adaptive procedure.

In the above-mentioned manuscript, we illustrate the capabilities of the new plugins with a three-alternative forced choice (3-AFC) amplitude modulation detection test (AMDT), available in this folder as 3afc_am_experiment.osexp; its implementation is shown step by step in the journal article. This folder contains the validation data that were recorded and used for the platform behaviour validation and the performance timing characterization of the newly introduced plugins. The validation data include: (i) xdf files, containing the lab streaming layer (LSL) recordings of the marker stream and the Windows audio stream (coming from a closed-loop setup), and (ii) csv files, containing the experiment data.
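To make the role of the Adaptive plugins concrete, the following is a minimal sketch of a 1-up/2-down transformed staircase, the kind of adaptive procedure such plugins drive. This is a generic illustration, not the actual Adaptive Init / Adaptive Next plugin code; the class name, starting level and step size are made-up values for the example.

```python
class Staircase:
    """Generic 1-up/2-down staircase (illustrative, not the OMEXP plugin code)."""

    def __init__(self, start_level=-5.0, step_db=2.0):
        self.level = start_level      # current stimulus level, e.g. modulation depth (dB)
        self.step = step_db           # fixed step size (dB); made-up value
        self.correct_streak = 0       # consecutive correct responses
        self.reversals = []           # levels at which the track changed direction
        self._last_dir = 0            # +1 = easier, -1 = harder, 0 = no move yet

    def next_level(self, correct):
        """Update the track with one response and return the next trial's level."""
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 2:   # two correct in a row -> make it harder
                self.correct_streak = 0
                self._move(-1)
        else:                              # one incorrect -> make it easier
            self.correct_streak = 0
            self._move(+1)
        return self.level

    def _move(self, direction):
        # A change of direction counts as a reversal of the track.
        if self._last_dir and direction != self._last_dir:
            self.reversals.append(self.level)
        self._last_dir = direction
        self.level += direction * self.step

# Simulated run with a fixed response pattern (as in a self-responding test):
sc = Staircase()
responses = [True, True, False, True, True, True, True, False]
levels = [sc.next_level(r) for r in responses]
```

Averaging the levels stored in `sc.reversals` is the usual way such a track estimates the detection threshold.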
Files
Steps to reproduce
Once OpenSesame has been installed, the OMEXP plugins can be integrated following the instructions on the GitHub page. We used the results from 3afc_am_experiment.osexp as validation data. The test does not require any user input because it simulates the responses of the test participant; we ran it 10 times to mimic 10 test participants. Our aim was to characterize the timing delays and jitter of OMEXP's audio playback and LSL stream acquisition. During the 3-AFC AMDT experiment we recorded two LSL streams: the Logger stream and the audio stream. The audio was acquired with the AudioCaptureWin application available on the LSL page. AudioCaptureWin makes the computer's microphone input available for recording over LSL, and we closed the loop by wiring the output channel of the Audio Mixer to the computer's microphone input via an external USB audio interface (ESI Maya 44 USB+). The Logger outlet stream saves the annotations in the LSL xdf file. The data used in the performance characterization described in Streamlining Experiment Design in Cognitive Hearing Science using OpenSesame are stored in the xdf files, while the experiment variables used for the platform behaviour validation are stored in the csv files, which OpenSesame saves automatically at the end of the test.
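The kind of analysis described above can be sketched as follows: compare the marker timestamps from the Logger stream with the measured sound onsets in the recorded audio stream, then report the mean delay and its jitter. The stream contents below are synthetic placeholder timestamps so the example is self-contained; in a real analysis they would be extracted from the xdf files (e.g. loaded with a reader such as pyxdf, which is an assumption, not part of the dataset).

```python
from statistics import mean, stdev

# Synthetic placeholder data for illustration only:
# marker_times - when the LSL marker says each sound started
# onset_times  - when the sound actually appears in the captured audio stream
marker_times = [1.000, 3.500, 6.020, 8.490, 11.010]
onset_times  = [1.062, 3.559, 6.083, 8.551, 11.074]

# Per-trial playback delay, converted to milliseconds
delays = [o - m for o, m in zip(onset_times, marker_times)]
delay_ms = [d * 1000 for d in delays]

# Mean delay characterizes the latency; the standard deviation is the jitter
print(f"mean delay:  {mean(delay_ms):.1f} ms")
print(f"jitter (SD): {stdev(delay_ms):.2f} ms")
```

With a closed-loop setup like the one described (Audio Mixer output wired back into the microphone input), both time series come from the same clock domain via LSL, so the difference directly reflects the playback path delay.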