Phonological Prediction Data
The eye-tracking technique, in the printed-word version of the visual world paradigm, was used to examine whether the phonological forms of upcoming words are predicted during speech comprehension. Participants listened to Mandarin Chinese spoken sentences while viewing two single-character printed words on the screen: a critical word and an unrelated distractor word. Each spoken sentence contained a highly predictable target word, and the phonological relationship between this target word and the critical word was systematically manipulated. The critical word was one of four types:

- the predicted Target Word itself;
- a Homophone Competitor (sharing the full pronunciation, i.e., consonant, vowel, and lexical tone, with the target word);
- a Tonal Competitor (sharing only the lexical tone with the target word); or
- an Unrelated Word.

One group of participants performed a “word judgment” task (deciding whether the spoken sentence mentioned any of the words in the visual scene) in Experiment 1a (exp1a_evs.rar), while another group performed a “pronunciation judgment” task (deciding whether the spoken sentence mentioned the pronunciation of any of the words in the visual scene) in Experiment 1b (exp1b_evs.rar).

To explore the mechanism underlying phonological preactivation, in particular whether phonological information is preactivated automatically or strategically, a third group of participants performed the “word judgment” task in one block (exp2_task1_evs.rar) and the “pronunciation judgment” task in another block (exp2_task2_evs.rar). Task order was counterbalanced across participants. In addition, Experiment 2 included only two types of critical words: phonological competitors and unrelated words.