GAZELOAD: A Multimodal Eye-Tracking Dataset for Mental Workload in Industrial Human–Robot Collaboration
Description
GAZELOAD is a multimodal eye-tracking dataset for mental workload (MWL) estimation in industrial human–robot collaboration (HRC). The data were collected from 26 participants performing collaborative assembly tasks with two cobots (a UR5e and a Franka Emika Panda) while wearing Meta ARIA Gen-1 smart glasses.

The dataset comprises four synchronized modalities: ocular signals (pupil diameter, fixations, saccades, and gaze vectors, plus derived gaze-behaviour metrics), continuous illuminance measurements, robot/task context logs (task block, bench, induced faults), and self-reported MWL ratings on a 1–10 scale. All modalities are time-stamped against a shared UTC reference, so the streams can be aligned and joined directly.

Eye-tracking features are provided as CSV files aggregated into 250 ms windows and organized in participant-specific folders with standardized schemas and detailed documentation. No identifiable video data are included; all signals are fully anonymized.

The dataset enables the development and benchmarking of MWL estimation models, analysis of gaze behaviour in collaborative assembly, and investigation of illumination effects on ocular workload markers.

The ocular feature set derived from the Meta ARIA recordings spans pupil diameter (PD), fixation-based features (count, duration), and saccade-based metrics (count, amplitude, velocity). Two higher-level gaze-behaviour descriptors are also provided: Gaze Transition Entropy (GTE), which quantifies the randomness of gaze shifts between task-relevant areas, and the Fixation Dispersion Index (FDI), which measures the spatial spread of fixation points and reflects how focused or exploratory the operator's visual strategy is. Together, these features capture both low-level oculomotor dynamics and higher-level attentional allocation during collaborative assembly.
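GTE is commonly computed as the conditional Shannon entropy of first-order gaze transitions between areas of interest (AOIs). The sketch below illustrates that standard formulation; the dataset documentation specifies the exact variant and AOI definitions used here, and the example AOI labels are purely illustrative.

```python
import numpy as np

def gaze_transition_entropy(aoi_sequence: list[str]) -> float:
    """Conditional Shannon entropy of gaze transitions between AOIs.

    One common formulation: H = -sum_i p(i) * sum_j p(j|i) * log2 p(j|i),
    where p(j|i) is the probability of shifting gaze from AOI i to AOI j.
    Higher values indicate more random scanning between task-relevant areas.
    """
    aois = sorted(set(aoi_sequence))
    idx = {a: k for k, a in enumerate(aois)}
    counts = np.zeros((len(aois), len(aois)))
    # Count first-order transitions between consecutive gaze samples.
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        counts[idx[src], idx[dst]] += 1
    if counts.sum() == 0:  # fewer than two samples: no transitions
        return 0.0
    row_totals = counts.sum(axis=1)
    p_i = row_totals / row_totals.sum()  # stationary AOI probabilities
    entropy = 0.0
    for i, total in enumerate(row_totals):
        if total == 0:
            continue
        p_cond = counts[i] / total       # p(j|i) for this source AOI
        nz = p_cond > 0
        entropy -= p_i[i] * np.sum(p_cond[nz] * np.log2(p_cond[nz]))
    return entropy

# Example: gaze alternating between the workbench, the robot, and a screen.
print(gaze_transition_entropy(["bench", "robot", "bench", "screen", "robot"]))
```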
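Definitions of fixation dispersion vary across the literature; as an illustrative stand-in rather than the dataset's own formula, the sketch below measures dispersion as the root-mean-square distance of fixation coordinates from their centroid, capturing the focused-versus-exploratory contrast described above.

```python
import numpy as np

def fixation_dispersion_index(fix_x: np.ndarray, fix_y: np.ndarray) -> float:
    """RMS distance of fixation points from their centroid (one plausible
    dispersion measure; the dataset's exact FDI definition may differ).
    Low values indicate tightly focused gaze; high values indicate a
    more exploratory visual strategy."""
    pts = np.column_stack([fix_x, fix_y])
    centroid = pts.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1))))

# Example with normalized gaze coordinates (illustrative values only).
print(fixation_dispersion_index(np.array([0.10, 0.12, 0.11]),
                                np.array([0.40, 0.42, 0.39])))
```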
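Finally, as a minimal sketch of how the 250 ms windowed feature files might be consumed, the snippet below assumes a hypothetical layout (GAZELOAD/&lt;participant&gt;/ocular_features.csv with timestamp_utc, pupil_diameter, and fixation_count columns); the actual folder and column names are defined in the dataset's schema documentation.

```python
from pathlib import Path

import pandas as pd

# Hypothetical root folder; adjust to wherever the dataset is unpacked.
DATA_ROOT = Path("GAZELOAD")

def load_participant(pid: str) -> pd.DataFrame:
    """Load the 250 ms windowed ocular features for one participant."""
    df = pd.read_csv(DATA_ROOT / pid / "ocular_features.csv")
    # All modalities share a UTC time reference, so parsed timestamps can
    # be used to join eye-tracking, illuminance, and task-context logs.
    df["timestamp_utc"] = pd.to_datetime(df["timestamp_utc"], utc=True)
    return df

features = load_participant("P01")
print(features[["timestamp_utc", "pupil_diameter", "fixation_count"]].head())
```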