Kinematic priming of action predictions

Published: 6 April 2023 | Version 2 | DOI: 10.17632/m6s3r6fzzs.2
Contributors:

Description

PrimedActionCategorization.csv

Kinematic primes consisted of 30 grasp-to-pour and 30 grasp-to-drink movements. On each trial, participants observed either a grasp-to-drink or a grasp-to-pour prime, followed by a drinking or pouring action probe. The action probe was a photograph of a male agent drinking or pouring. We manipulated the relationship between the kinematic prime and the action probe on the intention dimension so that it was congruent in 75% of trials and incongruent in 25% of trials. Participants were asked to indicate whether the agent depicted in the action probe drank or poured. Participants performed three blocks of 80 trials (60 congruent, 20 incongruent), for a total of 240 trials. Eye movements were monitored with an EyeLink 1000 Plus desk-mounted eye tracker.

Each row corresponds to a trial, with the following columns:
Subject: numeric identifier for each subject
Video: numeric identifier for each video
Prime_Intention: the intention of the reach-to-grasp kinematic prime
Probe_Intention: the intention of the action probe
Congruency: the relationship between the kinematic prime and the action probe
Accuracy: whether the subject's probe intention discrimination choice was correct (0/1)
RT: the subject's response time (ms)
Fixation_Area: the quadrant of the probe image that the participant fixated first
Fixation_Relevant: whether the first fixation was directed at the region of the probe containing task-relevant information (0/1)
Readout: intention information read out by perceivers from single-trial movement kinematics
Encoding: intention information encoded in single-trial movement kinematics

IntentionDiscrimination.csv

One hour after completing the primed action categorization task, participants performed an intention discrimination task on the stimuli used as kinematic primes in the primed action categorization task. Task structure conformed to a two-alternative forced-choice (2AFC) design.
On each trial, a reach-to-grasp act was displayed, and participants were asked to indicate the intention of the observed reach. After responding, participants rated the confidence of their choice on a four-level scale. Participants performed 4 blocks of 60 trials, for a total of 240 trials; each video was viewed once per block.

Each row corresponds to a trial, with the following columns:
Subject: numeric identifier for each subject
Video: numeric identifier for each video
Video_Intention: the intention of the displayed reach-to-grasp act
Accuracy: whether the subject's intention discrimination choice was correct (0/1)
RT: the subject's response time (ms)
Conf_Rating: the subject's confidence rating (1-4)
Readout: intention information read out by perceivers from single-trial movement kinematics
Encoding: intention information encoded in single-trial movement kinematics
Read_Intention: the intention predicted by the readout model
Read_Accuracy: whether the readout model's intention prediction was correct (0/1)
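As a minimal sketch of how the trial-level columns can be used, the snippet below computes the basic priming contrast (mean accuracy and response time by prime-probe congruency) with pandas. The rows here are synthetic, illustrative values only; with the actual data one would instead read PrimedActionCategorization.csv with pandas.read_csv and the documented column names.

```python
import pandas as pd

# Synthetic rows mimicking the documented schema of
# PrimedActionCategorization.csv (values are illustrative, not real data).
trials = pd.DataFrame({
    "Subject": [1, 1, 1, 1],
    "Video": [3, 7, 12, 21],
    "Prime_Intention": ["drink", "pour", "drink", "pour"],
    "Probe_Intention": ["drink", "drink", "pour", "pour"],
    "Congruency": ["congruent", "incongruent", "incongruent", "congruent"],
    "Accuracy": [1, 1, 0, 1],
    "RT": [512, 634, 701, 498],
})

# Mean accuracy and response time by prime-probe congruency:
# the core behavioral contrast the dataset supports.
summary = trials.groupby("Congruency")[["Accuracy", "RT"]].mean()
print(summary)
```

The same pattern extends to the eye-movement columns (e.g. grouping Fixation_Relevant by Congruency) or to per-subject summaries by adding "Subject" to the groupby keys.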

Categories

Kinematics, Priming, Human Cognition, Intention
