Multi-Modal User Modeling for Task Guidance (MUMTG): A Dataset for Real-Time Assistance with Stress and Interruption Dynamics

Published: 1 February 2024 | Version 1 | DOI: 10.17632/b7c2h6cbc6.1
Contributors:
Benjamin Rheault, Ahmed Rageeb Ahsan, Brett Benda, Tyler Audino, Samuel Lonneman, Eric Ragan

Description

We introduce the Multi-Modal User Modeling for Task Guidance (MUMTG) dataset to support the development of AI models for real-time task assistance. The dataset was created through human-subjects studies in which users performed search and assembly procedures with help from virtual instructions presented on either a head-worn AR headset or a laptop screen. The data-collection study used a game-like scenario to guide participants through six tasks that vary in difficulty. Within each group, we manipulated task duration and induced several stress triggers to increase cognitive demand for three of the tasks. The dataset includes physiological data such as electrodermal activity, temperature, heart rate, pupil dilation, and gaze. We also collected subjective self-report ratings of task workload and emotional responses after each task. We offer this rich dataset as a resource to facilitate the development of user models for task guidance in highly demanding contexts.
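As a starting point for working with the multiple physiological streams described above, the sketch below shows one way to load and time-align per-participant recordings in Python. The file names, column names, directory layout, and the 0.5 s alignment tolerance are all hypothetical assumptions for illustration; consult the dataset's files and documentation for the actual schema.

```python
# A minimal sketch of loading and time-aligning the physiological streams.
# File names, column names, and the per-participant directory layout are
# HYPOTHETICAL -- check the dataset documentation for the real schema.
from pathlib import Path

import pandas as pd


def load_participant(root: Path, participant_id: str) -> pd.DataFrame:
    """Merge EDA, heart-rate, and gaze streams onto a common timeline."""
    pdir = root / participant_id  # assumed: one folder per participant

    # Assumed CSVs sharing a 'timestamp' column in seconds.
    eda = pd.read_csv(pdir / "eda.csv")          # electrodermal activity
    hr = pd.read_csv(pdir / "heart_rate.csv")    # heart rate
    gaze = pd.read_csv(pdir / "gaze.csv")        # pupil dilation and gaze

    # Sort each stream by time, then align the slower streams to the gaze
    # stream with a nearest-timestamp merge (0.5 s tolerance is a guess).
    frames = [df.sort_values("timestamp") for df in (gaze, eda, hr)]
    merged = frames[0]
    for other in frames[1:]:
        merged = pd.merge_asof(
            merged, other, on="timestamp",
            direction="nearest", tolerance=0.5,
        )
    return merged


if __name__ == "__main__":
    df = load_participant(Path("MUMTG"), "P01")  # hypothetical participant ID
    print(df.head())
```

A nearest-timestamp merge is used here because the signals (e.g., EDA versus eye tracking) are typically sampled at different rates; resampling to a fixed rate would be an equally reasonable choice depending on the downstream model.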

Institutions

University of Florida

Categories

Artificial Intelligence, User Modeling, Human-Centred Design, Human-Computer Interaction

Licence