Spatial Sound in a 3D Virtual Environment: All Bark and No Bite?

2021 ◽  
Vol 5 (4) ◽  
pp. 79
Author(s):  
Radha Nila Meghanathan ◽  
Patrick Ruediger-Flore ◽  
Felix Hekele ◽  
Jan Spilski ◽  
Achim Ebert ◽  
...  

Although the focus of Virtual Reality (VR) lies predominantly on the visual world, acoustic components enhance the functionality of a 3D environment. To study the interaction between the visual and auditory modalities in a 3D environment, we investigated the effect of auditory cues on visual search in 3D virtual environments containing both visual and auditory noise. In an experiment, we asked participants to detect visual targets in a 360° video under conditions with and without environmental noise. Auditory cues indicating the target location were either absent or presented as simple stereo or binaural audio, both of which assist sound localization. To investigate the efficacy of these cues in distracting environments, we measured participant performance using a VR headset with an eye tracker. We found that the binaural cue outperformed both the stereo and no-cue conditions in target detection, irrespective of the environmental noise. We used two eye movement measures and two physiological measures to evaluate task dynamics and mental effort. We found that the absence of a cue increased target search duration and target search path, measured as time to fixation and gaze trajectory length, respectively. Our physiological measures of blink rate and pupil size showed no difference between the different stadium and cue conditions. Overall, our study provides evidence for the utility of binaural audio in a realistic, noisy virtual environment for performing a target detection task that is a crucial part of everyday behaviour: finding someone in a crowd.
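The two eye-movement measures described above, time to fixation and gaze trajectory length, can be sketched from raw gaze samples. This is an illustrative reconstruction, not the paper's actual processing pipeline: the `(t, x, y)` sample format, the target as a point, and the fixation radius are all assumptions.

```python
import math

def gaze_metrics(samples, target, radius):
    """Compute time to first fixation on the target and total gaze
    trajectory length from timestamped gaze points.

    samples: list of (t_seconds, x, y) gaze positions, time-ordered
    target:  (x, y) centre of the target region
    radius:  a sample within this distance of the target counts as on-target
    """
    time_to_fixation = None
    path_length = 0.0
    prev = None
    for t, x, y in samples:
        if prev is not None:
            # accumulate the length of the gaze trajectory
            path_length += math.dist(prev, (x, y))
        prev = (x, y)
        if time_to_fixation is None and math.dist((x, y), target) <= radius:
            time_to_fixation = t
    return time_to_fixation, path_length
```

Longer trajectories and later fixation times would then indicate a less efficient search, as reported for the no-cue condition.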

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2535 ◽  
Author(s):  
Il-Kyu Ha ◽  
You-Ze Cho

Finding a target quickly is one of the most important tasks in drone operations. In particular, rapid target detection is a critical issue for tasks such as finding rescue victims during the golden period, environmental monitoring, locating military facilities, and monitoring natural disasters. Therefore, in this study, an improved hierarchical probabilistic target search algorithm based on the collaboration of drones at different altitudes is proposed. This method reduces the search time and search distance by improving the information transfer between high-altitude and low-altitude drones. Specifically, to improve the speed of target detection, a high-altitude drone first searches a wide area. Then, when the probability of the target's existence exceeds a certain threshold, the search information is transmitted to a low-altitude drone, which performs a more detailed search in the identified area. This method takes full advantage of the fast searching capability at high altitudes: it reduces the total time and travel distance required by quickly covering a wide search area. Several collaboration scenarios that can be performed by two drones at different altitudes are described and compared to the proposed algorithm. Through simulations, the performances of the proposed algorithm and the cooperation scenarios are analyzed. It is demonstrated that methods utilizing hierarchical searches with drones perform well overall and that the proposed algorithm is approximately 13% more effective than a previous method and substantially better than the other scenarios.
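The high-to-low-altitude handoff described above can be sketched as a threshold test over a probability grid. This is a toy illustration under stated assumptions: the grid representation, row-order sweep, and function names are inventions for clarity, not the paper's algorithm.

```python
def hierarchical_search(prob_grid, threshold):
    """Sketch of the handoff step: the high-altitude drone sweeps coarse
    cells in row order; whenever a cell's estimated target probability
    exceeds `threshold`, that cell is handed off for a detailed
    low-altitude search. Returns the handoff cells in visit order.

    prob_grid: 2D list of per-cell target-existence probabilities
    threshold: probability above which the low-altitude drone is dispatched
    """
    handoffs = []
    for r, row in enumerate(prob_grid):
        for c, p in enumerate(row):
            if p > threshold:
                # search information transmitted to the low-altitude drone
                handoffs.append((r, c))
    return handoffs
```

Tuning the threshold trades off low-altitude effort against the risk of the high-altitude drone skipping the target's cell.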


Author(s):  
Eppili Jaya ◽  
B. T. Krishna

Target detection is one of the important subfields of Synthetic Aperture Radar (SAR) research. It faces several challenges due to stationary objects, which give rise to scattered clutter signals. Many researchers have succeeded at target detection, and this work introduces an approach for moving target detection in SAR. The newly developed scheme, named Adaptive Particle Fuzzy System for Moving Target Detection (APFS-MTD), utilizes particle swarm optimization (PSO), adaptive learning, and fuzzy linguistic rules in the APFS to identify the target location. Initially, the signals received from the SAR are fed through the Generalized Radon-Fourier Transform (GRFT), the Fractional Fourier Transform (FrFT), and a matched filter to compute the correlation using the Ambiguity Function (AF). The location of the target is then identified in the search space and forwarded to the proposed APFS, which is a modification of the standard adaptive genetic fuzzy system using PSO. The performance of the APFS-based MTD is evaluated in terms of detection time, missed target rate, and Mean Square Error (MSE). The developed method achieves a minimal detection time of 4.13 s, a minimal MSE of 677.19, and a minimal missed target rate of 0.145.
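Of the pipeline stages named above, the matched filter is the simplest to illustrate: it cross-correlates the received signal with the known transmitted waveform and takes the lag with the largest response. The sketch below models only that stage, on plain real-valued lists; the GRFT, FrFT, ambiguity function, and fuzzy-system stages of APFS-MTD are not represented.

```python
def matched_filter_delay(received, template):
    """Return the lag at which the received signal correlates most
    strongly with the known template waveform -- a stand-in for the
    matched-filter stage of a radar detection pipeline.

    received: list of signal samples
    template: list of samples of the known transmitted waveform
    """
    best_lag, best_score = 0, float("-inf")
    n, m = len(received), len(template)
    for lag in range(n - m + 1):
        # raw correlation of the template against this segment
        score = sum(received[lag + i] * template[i] for i in range(m))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

The winning lag corresponds to the round-trip delay, and hence the range, of the strongest echo.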


2019 ◽  
Vol 6 ◽  
pp. 205566831984130
Author(s):  
Nahal Norouzi ◽  
Luke Bölling ◽  
Gerd Bruder ◽  
Greg Welch

Introduction: A large body of research in the field of virtual reality focuses on making user interfaces more natural and intuitive by leveraging natural body movements to explore a virtual environment. For example, head-tracked user interfaces allow users to look around a virtual space naturally by moving their head. However, such approaches may not be appropriate for users with temporary or permanent limitations of their head movement. Methods: In this paper, we present techniques that allow these users to gain the benefits of a virtual environment from a reduced range of physical movement. Specifically, we describe two techniques that augment virtual rotations relative to physical movement thresholds. Results: We describe how each of the two techniques can be implemented with either a head tracker or an eye tracker, e.g. in cases where no physical head rotations are possible. Conclusions: We discuss their differences and limitations, and we provide guidelines for the practical use of such augmented user interfaces.
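One way to augment virtual rotation relative to a physical movement threshold is to pass small rotations through unchanged and amplify rotation beyond the threshold. The sketch below is a minimal illustration of that idea, assuming a single yaw axis; the specific threshold and gain values are arbitrary and are not those of the paper.

```python
def augmented_yaw(physical_yaw_deg, threshold_deg=30.0, gain=3.0):
    """Map a limited physical head (or gaze) rotation to a larger
    virtual rotation: motion within +/-threshold_deg is passed through
    1:1, and rotation beyond the threshold is amplified by `gain`.
    """
    if abs(physical_yaw_deg) <= threshold_deg:
        return physical_yaw_deg
    sign = 1.0 if physical_yaw_deg > 0 else -1.0
    # amplify only the portion of the rotation beyond the threshold
    return sign * (threshold_deg + gain * (abs(physical_yaw_deg) - threshold_deg))
```

With these example values, a user who can physically turn only 40° reaches a 60° virtual rotation, while small rotations remain unamplified to keep near-centre viewing stable.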


2010 ◽  
Vol 19 (1) ◽  
pp. 12-24 ◽  
Author(s):  
Michael Donnerer ◽  
Anthony Steed

Brain–computer interfaces (BCIs) provide a novel form of human–computer interaction. The purpose of these systems is to aid disabled people by affording them the possibility of communication and environment control. In this study, we present experiments using a P300 based BCI in a fully immersive virtual environment (IVE). P300 BCIs depend on presenting several stimuli to the user. We propose two ways of embedding the stimuli in the virtual environment: one that uses 3D objects as targets, and a second that uses a virtual overlay. Both ways have been shown to work effectively with no significant difference in selection accuracy. The results suggest that P300 BCIs can be used successfully in a 3D environment, and this suggests some novel ways of using BCIs in real world environments.
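A P300 BCI of the kind described above selects a target by flashing each candidate stimulus repeatedly and looking for the event-related response in the EEG. The sketch below is a deliberately crude stand-in for the real classifiers used in such systems: it averages the epochs recorded after each stimulus's flashes and picks the stimulus with the largest averaged peak. The data layout and function names are illustrative assumptions.

```python
def p300_select(epochs_by_stimulus):
    """Pick the attended stimulus as the one whose averaged
    post-stimulus EEG epoch has the largest peak amplitude.

    epochs_by_stimulus: dict mapping a stimulus id to a list of
    equal-length epochs (each a list of EEG samples recorded after
    one flash of that stimulus).
    """
    def avg_peak(epochs):
        n = len(epochs)
        # average the epochs sample-by-sample, then take the peak
        mean = [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]
        return max(mean)
    return max(epochs_by_stimulus, key=lambda s: avg_peak(epochs_by_stimulus[s]))
```

Averaging across repeated flashes is what makes the P300, which is small relative to background EEG on any single trial, detectable.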


Author(s):  
John P. McIntire ◽  
Paul R. Havig ◽  
Scott N. J. Watamaniuk ◽  
Robert H. Gilkey

1996 ◽  
Vol 5 (3) ◽  
pp. 290-301 ◽  
Author(s):  
Claudia Hendrix ◽  
Woodrow Barfield

Two studies were performed to investigate the sense of presence within stereoscopic virtual environments as a function of the addition or absence of auditory cues. The first study examined the presence or absence of spatialized sound, while the second compared nonspatialized sound to spatialized sound. Sixteen subjects navigated freely through several virtual environments; for each environment, their level of presence, the realism of the virtual world, and the interactivity between participant and environment were evaluated using survey questions. The results indicated that the addition of spatialized sound significantly increased the sense of presence but not the realism of the virtual environment. Despite this outcome, the addition of a spatialized sound source significantly increased the realism with which the subjects interacted with the sound source, and significantly increased the sense that sounds emanated from specific locations within the virtual environment. The results suggest that, in the context of a navigation task, while presence in virtual environments can be improved by the addition of auditory cues, the perceived realism of a virtual environment may be influenced more by changes in the visual than in the auditory display media. Implications of these results for presence within auditory virtual environments are discussed.


2015 ◽  
Author(s):  
Gerhard Marquart ◽  
Joost de Winter

Pupillometry is a promising method for assessing mental workload and could be helpful in the optimization of systems that involve human-computer interaction. The present study focuses on replicating the studies by Ahern (1978) and Klingner (2010), which found that for three levels of difficulty of mental multiplications, the more difficult multiplications yielded larger dilations of the pupil. Using a remote eye tracker, our research expands upon these two previous studies by statistically testing for each 1.5 s interval of the calculation period (1) the mean absolute pupil diameter (MPD), (2) the mean pupil diameter change (MPDC) with respect to the pupil diameter during the pre-stimulus accommodation period, and (3) the mean pupil diameter change rate (MPDCR). An additional novelty of our research is that we compared the pupil diameter measure with a self-report measure of workload, the NASA Task Load Index (NASA-TLX), and with the mean blink rate (MBR). The results showed that the findings of Ahern and Klingner were replicated, and that the MPD and MPDC discriminated just as well between the lowest and highest difficulty levels as did the NASA-TLX. The MBR, on the other hand, did not interpretably differentiate between the difficulty levels. Moderate to strong correlations were found between the MPDC and the proportion of incorrect responses, indicating that the MPDC was higher for participants with a poorer performance. For practical applications, validity could be improved by combining pupillometry with other physiological techniques.
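The MPDC measure described above, the mean pupil diameter change relative to a pre-stimulus baseline, computed per interval of the calculation period, can be sketched as follows. The sample layout and interval handling are illustrative assumptions; the study's intervals were 1.5 s of samples at the tracker's sampling rate.

```python
def mean_pupil_diameter_change(diameters, baseline, samples_per_interval):
    """Split a stream of pupil-diameter samples into consecutive
    intervals and return the mean change relative to the pre-stimulus
    baseline in each interval (the MPDC measure).

    diameters:            list of pupil-diameter samples, time-ordered
    baseline:             mean diameter during the pre-stimulus period
    samples_per_interval: samples per interval (e.g. 1.5 s * sample rate)
    """
    mpdc = []
    for start in range(0, len(diameters), samples_per_interval):
        chunk = diameters[start:start + samples_per_interval]
        if chunk:
            # mean deviation from baseline within this interval
            mpdc.append(sum(d - baseline for d in chunk) / len(chunk))
    return mpdc
```

Subtracting the per-trial baseline is what makes the measure comparable across participants with different resting pupil sizes.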


2020 ◽  
Vol 8 (2) ◽  
pp. 100-106
Author(s):  
Dmitry A. Utev ◽  
Irina V. Borisova ◽  
Valery P. Yushchenko

The problem of the stability of object detection in images using proximity measures is considered. The purpose of the work is to determine the degree of invariance of various proximity measures for detecting objects against a reference when the scanned image is rotated and zoomed, and to identify the proximity measure most resistant to these geometric transformations. The proximity measures analyzed are correlation, comparison, and Chamfer distance. The target location is given by the coordinates of the extremum of the target function. Modeling is performed in the Matlab software package. A database of thirty television images was created to test the proximity measures. The test images contain the required objects and imitations of both complex and simple backgrounds. It was determined that all the considered proximity measures reliably locate the target under small rotations and scaling factors.
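Locating the target at the extremum of the target function can be illustrated with the simplest of the measures mentioned, raw correlation: slide the reference template over the image and take the position of the maximum score. Plain nested lists stand in here for the study's Matlab matrices, and this sketch ignores the rotation and scaling of the scanned image that the study actually evaluates.

```python
def best_match(image, template):
    """Slide the template over the image and return the (row, col)
    where the raw correlation score is maximal, i.e. the extremum of
    the target function used to locate the object.

    image:    2D list of pixel intensities
    template: 2D reference patch, no larger than the image
    """
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("-inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # correlation of the template with the image patch at (r, c)
            score = sum(image[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Raw correlation is biased toward bright regions; normalized variants and the Chamfer distance trade that bias for more computation, which is precisely the kind of difference the study's invariance comparison probes.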

