The Interaction Efficiency of Different Visual Areas on a Virtual Reality Device Screen under Standing versus Sitting Posture

Published: 31 May 2024 | Version 1 | DOI: 10.17632/4bc37v7n9r.1
Contributor:
景宣

Description

User posture can affect cognition. Virtual reality (VR) devices extend the traditional two-dimensional interface into three-dimensional space, necessitating changes in body posture for users to interact with it. This study evaluated the interaction efficiency of users in standing versus sitting postures when selecting targets in the four corner areas of the interface in a VR environment. A total of 35 participants joined the experiment, in which the time participants spent searching for and selecting targets under the different conditions was measured. A two-way analysis of variance (ANOVA) indicated that the interaction efficiency of participants in a sitting posture was generally higher than in a standing posture. When searching for targets on the left side, participants spent less time than in other areas in both standing and sitting postures. Specifically, searching for targets in the top left corner took less time than in other areas in a standing posture, while searching for targets in the bottom left area took less time in a sitting posture. The results suggest that posture also influences cognitive performance in VR environments. Moreover, high interaction efficiency could be achieved when the interface was placed in the top left area for a standing posture and in the bottom left area for a sitting posture. These findings could provide insights for VR developers on interaction design, helping optimize interface layout according to user posture to ensure the expected interaction efficiency and user experience.
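
For reference, a minimal Python sketch of the two-way ANOVA described above is given below. It assumes a per-trial table with columns such as participant, posture, area, and time; the column and file names are illustrative assumptions and may not match the actual files in this dataset.

```python
# Hypothetical analysis sketch, assuming one row per trial with columns:
# "participant", "posture" (standing/sitting),
# "area" (top_left/top_right/bottom_left/bottom_right),
# and "time" (search-and-select time in seconds).
# File and column names are illustrative, not taken from the dataset files.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("selection_times.csv")  # hypothetical file name

# Two-way ANOVA: posture x interface area on selection time
model = ols("time ~ C(posture) * C(area)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# Mean selection time per condition, for comparison with the reported pattern
print(df.groupby(["posture", "area"])["time"].mean())
```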

Files

Steps to reproduce

During the experiment, participants aimed the laser at the character they intended to select and then pressed the physical button on the joystick with their non-dominant hand to complete the selection. They were instructed to search and select as quickly as possible, and the interface did not advance until the task was completed. To ensure that every task started from the same point, each task began with the participant using the joystick to select a blue block located in the central area; the target character could be searched for and selected only after the blue block disappeared. The system measured the time from the initial selection of the blue block to the selection of the target character. When the selected character matched the target, the system gave audible feedback to indicate a correct selection and to prompt the participant to proceed to the next task; otherwise, a different sound signalled a wrong selection, and the participant continued selecting until the correct character was chosen. After each search-and-select task, a cross appeared in the center of the screen for 1 s to eliminate the impact of the current task on the next.

Participants completed the experiment in both standing and sitting postures, tested in a random order. Before the test started, the experimenter introduced the basic requirements to each participant. The participant was guided to sit upright in the designated position with eyes looking straight ahead, then put on the VR glasses and adjusted them until objects within the field of view were clear. The participant was then introduced to the basic operations and completed a set of five practice tasks before the formal experiment; numbers were used in place of letters in the practice tasks to avoid affecting the formal experiment.

After the practice, the participant moved on to the sitting round of the formal experiment, first completing 50 letter search-and-select tasks in a sitting posture. After these 50 tasks, the participant was notified by text and sound that the round was complete and could take off the VR glasses for a five-minute rest. After the break, the standing round commenced: the participant stood at a designated position, re-fitted the VR glasses, and completed another 50 letter search-and-select tasks. Upon completion of the 50 standing tasks, the participant was notified by text and sound that the whole experiment had ended. Lastly, a brief interview was conducted with each participant to understand their impressions of interacting with different areas in different postures. The entire experiment lasted about 20 minutes.
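
For clarity, the per-trial timing and feedback logic described above is sketched below in Python. The actual experiment ran on a VR system, so this outline only illustrates the measured interval (from the blue start block to the correct target) and the feedback rules; function names such as wait_for_selection, play_sound, and show_fixation_cross are placeholders, not the actual implementation.

```python
# Minimal sketch of one trial, under the assumptions stated above.
import time

def run_trial(target_char, wait_for_selection, play_sound, show_fixation_cross):
    # Trial starts when the central blue block is selected
    wait_for_selection("blue_block")
    start = time.perf_counter()

    # Participants keep selecting until they hit the target character
    while True:
        selected = wait_for_selection("any_character")
        if selected == target_char:
            play_sound("correct")   # audible confirmation; proceed to next task
            break
        play_sound("wrong")         # error tone; selection continues

    elapsed = time.perf_counter() - start  # recorded search-and-select time

    # 1 s fixation cross between trials to reduce carry-over effects
    show_fixation_cross(duration=1.0)
    return elapsed
```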

Institutions

Tianjin University

Categories

Virtual Reality, Industrial Design, Human-Computer Interaction

Licence