Multi-Modal User Modeling for Task Guidance (MUMTG): A Dataset for Real-Time Assistance with Stress and Interruption Dynamics
Description
We introduce the Multi-Modal User Modeling for Task Guidance (MUMTG) dataset to support the development of AI models for real-time task assistance under stress and interruption. The dataset was created through human-subjects studies in which users performed search and assembly procedures with help from virtual instructions delivered either through an augmented reality (AR) headset or on a laptop screen. The data-collection study used a game-like scenario to guide participants through six tasks of varying difficulty. Within each group, we manipulated task duration and introduced stress triggers to raise the cognitive demand of three of the tasks. The dataset includes physiological signals such as electrodermal activity, temperature, heart rate, pupil dilation, and gaze. We also collected subjective self-report ratings of task workload and emotional responses after each task. We offer this rich dataset as a resource to facilitate the development of user models for task guidance in highly demanding contexts.
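As an illustration of how the multimodal streams might be combined, the minimal sketch below resamples and aligns several physiological signals onto a common timeline. The file names, column names, and 1 Hz grid are assumptions for illustration only, since the release format is not specified here; adapt them to the actual MUMTG data layout.

```python
# Minimal sketch: aligning hypothetical per-modality CSV exports onto a shared
# timeline. File names, column names, and sampling rates are assumptions for
# illustration only, not the MUMTG release format.
import pandas as pd

def load_stream(path: str, value_col: str) -> pd.Series:
    """Load one physiological stream indexed by its timestamp column."""
    df = pd.read_csv(path, parse_dates=["timestamp"])
    return df.set_index("timestamp")[value_col]

# Hypothetical per-participant, per-task files.
eda = load_stream("P01/task3_eda.csv", "eda_microsiemens")
hr = load_stream("P01/task3_hr.csv", "heart_rate_bpm")
pupil = load_stream("P01/task3_pupil.csv", "pupil_diameter_mm")

# Resample each stream to a common 1 Hz grid and join into one frame,
# interpolating short gaps so the modalities share timestamps.
aligned = (
    pd.concat({"eda": eda, "hr": hr, "pupil": pupil}, axis=1)
    .resample("1s")
    .mean()
    .interpolate(limit=5)
)
print(aligned.head())
```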