Modeling Workload for Target Detection from a Moving Vehicle With a Head-Mounted Display and Sound Localization

2002 ◽  
Author(s):  
Christopher C. Smyth

Author(s):  
James Brooks ◽  
Riley Lodge ◽  
Daniel White

The use of micro-unmanned aerial vehicles (UAVs) in military operations is rapidly increasing. However, limitations in the design of their human-machine interfaces (HMIs) can reduce their effectiveness. We propose that presenting the HMI of a micro-UAV through a head-mounted display (HMD) is a viable alternative to using a flat-screen display; however, factors such as simulator sickness and discomfort may reduce its usability. The present experiment compared participants’ target detection performance, usability ratings, and levels of simulator sickness when using either an HMD or a flat-screen display in a micro-UAV simulation. Overall, there was no significant difference in performance between the two display conditions. However, participants reported significantly higher levels of mental workload, physical discomfort, and simulator sickness when using the HMD. Further, previous experience with virtual reality devices or video games did not reduce the levels of mental workload or simulator sickness experienced during the task. The results demonstrate that, at present, HMDs may not be suitable display devices for performing visual search tasks whilst flying micro-UAVs in urban environments.


2020 ◽  
Author(s):  
V. Gaveau ◽  
A. Coudert ◽  
R. Salemme ◽  
E. Koun ◽  
C. Desoche ◽  
...  

Abstract. In everyday life, localizing a sound source in free field entails more than the sole extraction of monaural and binaural auditory cues to define its location in three dimensions (azimuth, elevation, and distance). In spatial hearing, we also take into account all the available visual information (e.g., cues to sound position, cues to the structure of the environment), and we resolve perceptual ambiguities through active listening behavior, exploring the auditory environment with head and/or body movements. Here we introduce a novel approach to sound localization in 3D named SPHERE (European patent no. WO2017203028A1), which exploits a commercially available virtual reality head-mounted display system with real-time kinematic tracking to combine all of these elements (controlled positioning of a real sound source and recording of participants’ responses in 3D, controlled visual stimulation, and active listening behavior). We show that SPHERE allows accurate sampling of the 3D spatial hearing abilities of normal-hearing adults, and that it allows the contribution of active listening to be detected and quantified. Specifically, comparing static versus free head motion during sound emission, we found an improvement in sound localization accuracy and precision. By combining visual virtual reality, real-time kinematic tracking, and real-sound delivery, we have achieved a novel approach to the study of spatial hearing, with the potential to capture real-life behaviors under laboratory conditions. Furthermore, our new approach also paves the way for clinical and industrial applications that will leverage the full potential of active listening and multisensory stimulation intrinsic to the SPHERE approach for the purposes of rehabilitation and product assessment.


2005 ◽  
Vol 19 (3) ◽  
pp. 216-231 ◽  
Author(s):  
Albertus A. Wijers ◽  
Maarten A.S. Boksem

Abstract. We recorded event-related potentials (ERPs) in an illusory conjunction task, in which subjects were cued on each trial to search for a particular colored letter in a subsequently presented test array consisting of three different letters in three different colors. In a proportion of trials the target letter was present, and in other trials none of the relevant features were present. In still other trials one of the features (color or letter identity) was present, or both features were present but not combined in the same display element. When relevant features were present, this resulted in an early posterior selection negativity (SN) and a frontal selection positivity (FSP). When a target was presented, this resulted in an FSP that was enhanced after 250 ms as compared to when both relevant features were present but not combined in the same display element, suggesting that this effect reflects an extra process of attending to both features bound to the same object. There were no differences between the ERPs in feature error and conjunction error trials, contrary to the idea that these two types of errors are due to different (perceptual and attentional) mechanisms. The P300 in conjunction error trials was much reduced relative to the P300 in correct target detection trials. A similar, error-related-negativity-like component was visible in the response-locked averages in correct target detection trials, in feature error trials, and in conjunction error trials. Dipole modeling of this component yielded a source in a deep medial-frontal location. These results suggest that this type of task induces a high level of response conflict, in which decision-related processes may play a major role.

