4 datasets for Cortex
Contributors: Robert Wilder, Amy Goodwin Davies, David Embick
... In this data set, we include the data for two auditory lexical decision experiments presented in the paper entitled "Differences between morphological and repetition priming in auditory lexical decision: Implications for decompositional models". Included with the data are the minimally processed raw data files, the R scripts used to prepare, trim, visualize, and model the data (using linear mixed-effects models), the model outputs themselves, and the graphs generated from the data. Please refer to the file_descriptions.pdf file for more details on the data. To start examining the data, please refer to the scripts labeled 1_exp1_data_prep.R and 1_exp2_data_prep.R. Following the other R scripts in the /scripts/ folder takes the reader through the analyses presented in the paper. These analyses read in CSV files from the /data/input/ folder and process them to create exp1_prep.csv, exp2_prep.csv, exp1_trim.csv, and exp2_trim.csv. These trimmed data sets are then used by the later scripts in the /scripts/ folder to generate the models found in the /scripts/data/ folder (the HTML outputs of which are in the /scripts/models/ folder) and the graphs found in the /scripts/graphs/ folder. The final scripts, entitled exp1_models.R and exp2_models.R, generate the models found in /scripts/data/. Please direct comments about the data set to the corresponding authors of the associated paper, Robert J. Wilder (firstname.lastname@example.org) and Amy Goodwin Davies (email@example.com).
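As a rough illustration of the modeling step described above (the released scripts themselves are in R and use lme4; this is only a minimal Python analogue on simulated data, and the column names log_rt, condition, and subject are hypothetical stand-ins for the fields in the trimmed CSVs):

```python
# Hypothetical sketch of a random-intercept linear mixed-effects model,
# of the kind used for the trimmed lexical decision data.
# NOT the repository's actual analysis (that is done in R with lme4);
# data and column names here are simulated/invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 30

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "condition": np.tile([0, 1], n_subj * n_trials // 2),  # e.g. primed vs. unprimed
})

# Simulated log reaction times with a per-subject random intercept
subj_intercepts = rng.normal(0, 0.1, n_subj)
df["log_rt"] = (6.5
                + 0.05 * df["condition"]
                + subj_intercepts[df["subject"].to_numpy()]
                + rng.normal(0, 0.2, len(df)))

# Random intercept by subject: log_rt ~ condition + (1 | subject)
model = smf.mixedlm("log_rt ~ condition", data=df, groups=df["subject"])
result = model.fit()
print(result.params["condition"])  # estimated fixed effect of condition
```

In the actual repository this step corresponds to exp1_models.R and exp2_models.R operating on exp1_trim.csv and exp2_trim.csv.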
Contributors: Elvio Blini, Caroline Tilikete, Alessandro Farnè, Fadila Hadj-Bouziane
... Data, the full analysis pipeline and scripts, and extra graphical depictions for the paper titled "Probing the role of the vestibular system in motivation and reward-based attention" by Elvio Blini, Caroline Tilikete, Alessandro Farnè, and Fadila Hadj-Bouziane. For information, contact: elvio.blini (at) gmail.com
The functional subdivision of the visual brain: Is there a real illusion effect on action? A multi-lab replication study (data for Cortex RR)
Contributors: Karl Kopiske
... Data and analyses for the paper 'The functional subdivision of the visual brain: Is there a real illusion effect on action? A multi-lab replication study', published as a Registered Report in Cortex. Authors: Karl K. Kopiske, Nicola Bruno, Constanze Hesse, Thomas Schenk, Volker H. Franz ABSTRACT - It has often been suggested that visual illusions affect perception but not actions such as grasping, as predicted by the "two-visual-systems" hypothesis of Milner & Goodale (1995, The Visual Brain in Action, MIT Press). However, at least for the Ebbinghaus illusion, relevant studies seem to reveal a consistent illusion effect on grasping (Franz & Gegenfurtner, 2008. Grasping visual illusions: Consistent data and no dissociation. Cognitive Neuropsychology). Two interpretations are possible: either grasping is not immune to illusions (arguing against dissociable processing mechanisms for vision-for-perception and vision-for-action), or some other factors modulate grasping in ways that mimic a vision-for-perception effect in actions. It has been suggested that one such factor may be obstacle avoidance (Haffenden, Schiff, & Goodale, 2001. The dissociation between perception and action in the Ebbinghaus illusion: Nonillusory effects of pictorial cues on grasp. Current Biology, 11, 177-181). In four different labs (total N = 144), we conducted an exact replication of previous studies suggesting obstacle avoidance mechanisms, implementing conditions that tested grasping as well as multiple perceptual tasks. This replication was supplemented by additional conditions to obtain more conclusive results. Our results confirm that grasping is affected by the Ebbinghaus illusion and demonstrate that this effect cannot be explained by obstacle avoidance.
Contributors: Jingyi Geng, Tatiana Schnur
... There are two general views regarding the organization of object knowledge. The feature-based view assumes that object knowledge is grounded in a widely distributed neural network in terms of sensory/function features (e.g., Warrington & Shallice, 1984), while the category-based view assumes in addition that object knowledge is organized by taxonomic and thematic categories (e.g., Schwartz et al., 2011). Using an fMRI adaptation paradigm, we compare predictions from the feature- and category-based views by examining the neural substrates recruited as subjects read word pairs that are identical, taxonomically related, thematically related, or unrelated, while controlling for the function features involved across the two categories. The feature-based view predicts that adaptation in function regions (i.e., left posterior middle temporal lobe, left premotor cortex) should be observed for related word pairs regardless of the taxonomic/thematic categories. In contrast, the category-based view generates the prediction that adaptation in the bilateral anterior temporal lobes should be observed for taxonomically related word pairs and adaptation in the left temporo-parietal junction should be observed for thematically related word pairs. By improving upon previous study designs and employing the fMRI adaptation task, this study has the potential to clarify the role of semantic categories and features in the organization of object knowledge.