EEG_bafa_McGurk_data

Published: 8 December 2017 | Version 1 | DOI: 10.17632/yydw84284f.1
Contributor:
Antoine Shahin

Description

These continuous EEG data (19 subjects, down-sampled from 1024 Hz to 250 Hz) are associated with an experimental design in which individuals watched silent videos of a speaker uttering the consonant-vowel (CV) /ba/ or /fa/, mixed and matched with audio files of /ba/ and /fa/. This created congruent (e.g., audio /ba/ combined with video /ba/) and incongruent (e.g., audio /ba/ combined with video /fa/) audiovisual videos. There were also audio-only and video-only conditions.

The subject numbers were 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24, 25, 26. Subjects 1-5 were pilot subjects who performed different tasks. Data from subject 8 are excluded because the subject indicated a language deficit. Data from subject 17 were not collected because of a technical glitch.

The triggers were as follows:
Trigger 10 represented the onset of the video /ba/.
Trigger 20 represented the onset of the video /fa/.
Trigger 30 represented the onset of a static video (no mouth movements).
Triggers 11, 12, 13 represented the onset of the acoustic /ba/ CV (the actual acoustic /ba/ occurred 50 ms after these triggers).
Triggers 14, 15, 16 represented the onset of the acoustic /fa/ CV.
Trigger 1 indicated a response that the sound was perceived as /ba/.
Trigger 2 indicated a response that the sound was perceived as /fa/.
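The sketch below is not part of the original dataset documentation; it is a minimal, hedged example of how the trigger scheme above might be used to epoch one recording with MNE-Python. The file name, file format (.bdf is assumed from the 1024 Hz native rate), and stim-channel name are assumptions, as is applying the 50 ms onset correction to the /fa/ triggers; only the trigger codes and the stated 50 ms delay for the /ba/ triggers come from the description above.

# Hypothetical sketch: epoch the continuous EEG around acoustic onsets with MNE-Python.
# File name, format, and stim-channel name are assumptions, not part of the dataset record.
import mne
import numpy as np

# Trigger codes as listed in the description
VIDEO_TRIGGERS = {"video/ba": 10, "video/fa": 20, "video/static": 30}
AUDIO_BA_TRIGGERS = [11, 12, 13]   # acoustic /ba/ markers; sound starts 50 ms after the trigger
AUDIO_FA_TRIGGERS = [14, 15, 16]   # acoustic /fa/ markers; same 50 ms delay assumed here
RESPONSE_TRIGGERS = {"resp/ba": 1, "resp/fa": 2}

raw = mne.io.read_raw_bdf("subject05.bdf", preload=True)   # hypothetical file name/format
events = mne.find_events(raw, stim_channel="Status")       # stim-channel name is an assumption

# Shift the acoustic triggers by 50 ms so epochs are locked to the true sound onset
audio_codes = AUDIO_BA_TRIGGERS + AUDIO_FA_TRIGGERS
shift = int(round(0.050 * raw.info["sfreq"]))
is_audio = np.isin(events[:, 2], audio_codes)
events[is_audio, 0] += shift

event_id = {f"audio/ba/{c}": c for c in AUDIO_BA_TRIGGERS}
event_id.update({f"audio/fa/{c}": c for c in AUDIO_FA_TRIGGERS})

# Epoch from -200 ms to 800 ms around the (corrected) acoustic onset
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)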

Files

Categories

Big Data

Licence