Sound Field Projection System using Optical See-Through Head Mounted Display

2021 ◽  
Vol 263 (5) ◽  
pp. 1267-1274
Author(s):  
Atsuto Inoue ◽  
Wataru Teraoka ◽  
Yasuhiro Oikawa ◽  
Takahiro Satou ◽  
Yasuyuki Iwane ◽  
...  

There are various ways to grasp the spatial and temporal structure of a sound field. Sound field visualization is an effective technique for understanding spatial sound information; for example, acoustical holography, optical methods, and beam-forming have been proposed and studied. In recent years, augmented reality (AR) technology has developed rapidly and become more familiar. Many sensors, display devices, and ICT technologies have been implemented in AR equipment, enabling interaction between the real and virtual worlds. In this paper, we propose an AR display system that presents the results obtained by the beam-forming method. The system consists of a 16-channel microphone array, a real-time sound field visualization system, and an optical see-through head-mounted display (OST-HMD). The real-time sound field visualization system analyses the sound signals recorded by the 16-channel microphone array using the beam-forming method. The processed sound pressure data are sent to the OST-HMD using the Transmission Control Protocol (TCP), and a colormap is projected onto the real world. The settings of the real-time sound field visualization system can be changed through a virtual user interface (UI) and TCP. In addition, multiple users can experience the system simultaneously by sharing the sound pressure and settings data. Using this system, users wearing the OST-HMD can observe sound field information intuitively.
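The abstract describes the processing chain only at a high level. As a rough illustration, the sketch below shows how a delay-and-sum beamformer could turn 16-channel array signals into a relative sound pressure map and push it to a display client over TCP. The sampling rate, scan grid, and length-prefixed JSON framing are assumptions made for the example, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): delay-and-sum beamforming on a
# 16-channel microphone array, with the resulting pressure map sent over TCP.
import json
import socket
import numpy as np

FS = 48_000   # assumed sampling rate [Hz]
C = 343.0     # speed of sound [m/s]

def delay_and_sum(signals, mic_positions, directions):
    """signals: (16, N) channel signals, mic_positions: (16, 3) coordinates [m],
    directions: (D, 3) unit vectors of the scan grid. Returns relative levels [dB]."""
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)                 # per-channel spectra
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / FS)
    power_map = np.empty(len(directions))
    for i, d in enumerate(directions):
        delays = mic_positions @ d / C                     # geometric delays [s]
        phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
        beam = (spectra * phase).sum(axis=0) / n_mics      # steered, phase-aligned sum
        power_map[i] = np.sum(np.abs(beam) ** 2)           # beam power per direction
    return 10 * np.log10(power_map / power_map.max())

def send_map(levels_db, host="127.0.0.1", port=5000):
    """Send the pressure map as one length-prefixed JSON message (assumed framing)."""
    payload = json.dumps({"levels_db": levels_db.tolist()}).encode()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(payload).to_bytes(4, "big") + payload)
```

In a real-time system such as the one described, the map would be recomputed block-wise and streamed continuously to the OST-HMD client, which renders it as a colormap overlaid on the real scene.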

2016 ◽  
Vol 140 (4) ◽  
pp. 3195-3196
Author(s):  
Atsuto Inoue ◽  
Yusuke Ikeda ◽  
Kohei Yatabe ◽  
Yasuhiro Oikawa

Author(s):  
Johannes M. Arend ◽  
Tim Lübeck ◽  
Christoph Pörschmann

Abstract: High-quality rendering of spatial sound fields in real-time is becoming increasingly important with the steadily growing interest in virtual and augmented reality technologies. Typically, a spherical microphone array (SMA) is used to capture a spatial sound field. The captured sound field can be reproduced over headphones in real-time using binaural rendering, virtually placing a single listener in the sound field. Common methods for binaural rendering first spatially encode the sound field by transforming it to the spherical harmonics domain and then decode the sound field binaurally by combining it with head-related transfer functions (HRTFs). However, these rendering methods are computationally demanding, especially for high-order SMAs, and require implementing quite sophisticated real-time signal processing. This paper presents a computationally more efficient method for real-time binaural rendering of SMA signals by linear filtering. The proposed method allows representing any common rendering chain as a set of precomputed finite impulse response filters, which are then applied to the SMA signals in real-time using fast convolution to produce the binaural signals. Results of the technical evaluation show that the presented approach is equivalent to conventional rendering methods while being computationally less demanding and easier to implement using any real-time convolution system. However, the lower computational complexity goes along with lower flexibility. On the one hand, encoding and decoding are no longer decoupled, and on the other hand, sound field transformations in the SH domain can no longer be performed. Consequently, in the proposed method, a filter set must be precomputed and stored for each possible head orientation of the listener, leading to higher memory requirements than the conventional methods. As such, the approach is particularly well suited for efficient real-time binaural rendering of SMA signals in a fixed setup where usually a limited range of head orientations is sufficient, such as live concert streaming or VR teleconferencing.
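The core idea, rendering by linear filtering, can be outlined compactly. The sketch below is a minimal illustration under assumed array and filter dimensions, not the authors' implementation: for the current head orientation, one precomputed FIR filter pair per SMA channel is applied by fast convolution, and the filtered channels are summed to form the two binaural signals.

```python
# Minimal sketch (assumed shapes, not the paper's code): binaural rendering of
# spherical microphone array (SMA) signals with precomputed FIR filters.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(sma_signals, fir_left, fir_right):
    """sma_signals: (Q, N) array of Q SMA channel signals.
    fir_left, fir_right: (Q, L) precomputed filters for one head orientation.
    Returns a (2, N + L - 1) array with the left- and right-ear signals."""
    left = np.sum([fftconvolve(sma_signals[q], fir_left[q])
                   for q in range(len(sma_signals))], axis=0)
    right = np.sum([fftconvolve(sma_signals[q], fir_right[q])
                    for q in range(len(sma_signals))], axis=0)
    return np.stack([left, right])
```

In a real-time system the same filtering would be performed block-wise (e.g. with partitioned convolution), and the filter pair would be swapped whenever the tracked head orientation changes, which is why a filter set must be stored for every supported orientation.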


2019 ◽  
Vol 40 (1) ◽  
pp. 1-11 ◽  
Author(s):  
Atsuto Inoue ◽  
Yusuke Ikeda ◽  
Kohei Yatabe ◽  
Yasuhiro Oikawa

Author(s):  
Thomas Kersten ◽  
Daniel Drenkhan ◽  
Simon Deggim

Abstract: Technological advancements in the area of Virtual Reality (VR) in the past years have the potential to fundamentally impact our everyday lives. VR makes it possible to explore a digital world with a Head-Mounted Display (HMD) in an immersive, embodied way. In combination with current tools for 3D documentation and modelling, and software for creating interactive virtual worlds, VR has the means to play an important role in the conservation and visualisation of cultural heritage (CH) for museums, educational institutions and other cultural areas. Corresponding game engines offer tools for interactive 3D visualisation of CH objects, which makes a new form of knowledge transfer possible with the direct participation of users in the virtual world. However, to ensure smooth and optimal real-time visualisation of the data in the HMD, VR applications should run at 90 frames per second (fps). This frame rate depends on several criteria, including the amount of data and the number of dynamic objects. In this contribution, the performance of a VR application has been investigated using different digital 3D models of the fortress Al Zubarah in Qatar at various resolutions. We demonstrate how the amount of data and the hardware equipment influence real-time performance, and that developers of VR applications should find a compromise between the amount of data and the available computer hardware to guarantee smooth real-time visualisation at approximately 90 fps. Therefore, CAD models offer better performance for real-time VR visualisation than meshed models due to their significantly reduced data volume.
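Purely as an illustration of the 90 fps requirement mentioned above (not code from the study), the target frame rate translates into a per-frame time budget of roughly 11.1 ms; a simple profiling check could compare measured frame times against that budget as follows.

```python
# Illustrative only: the 90 fps target implies a budget of about 11.1 ms per frame.
TARGET_FPS = 90
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS   # ~11.1 ms

def share_within_budget(frame_times_ms):
    """Fraction of measured frame times that stay within the 90 fps budget."""
    within = sum(1 for t in frame_times_ms if t <= FRAME_BUDGET_MS)
    return within / len(frame_times_ms)

# Example: frames at 9.5, 10.8 and 14.2 ms -> 2/3 of frames meet the target.
print(share_within_budget([9.5, 10.8, 14.2]))
```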


Author(s):  
F. Boehm ◽  
P. J. Schuler ◽  
R. Riepl ◽  
L. Schild ◽  
T. K. Hoffmann ◽  
...  

Abstract: Microvascular procedures require visual magnification of the surgical field, e.g. by a microscope. This can be accompanied by an unergonomic posture, with musculoskeletal pain or long-term degenerative changes, as the surgeon's eyes are bound to the oculars throughout the whole procedure. The presented study describes the advantages and drawbacks of a 3D exoscope camera system. The RoboticScope® system (BHS Technologies®, Innsbruck, Austria) features a high-resolution 3D camera that is placed over the surgical field and a head-mounted display (HMD) to which the camera images are transferred. A motion sensor in the HMD allows hands-free repositioning of the exoscope via head movements. For a general evaluation of the system functions, coronary artery anastomoses were first performed on ex-vivo pig hearts. Second, the system was evaluated in vivo for the anastomosis of a radial forearm free flap in a clinical setting. Positioning the system was possible entirely hands-free using head movements. Camera control was intuitive; visualization of the operation site was adequate and independent of head or body position. Apart from technical instruction by the providing company, no special surgical training of the surgeons or involved staff was necessary before performing the procedures. An ergonomic assessment questionnaire showed a more favorable ergonomic position compared with surgery using a microscope. The outcome of the operated patient was good, with no intra- or postoperative complications. The exoscope allows the surgeon to change head and body position without losing focus on the operation site and supports an ergonomic working posture. Repeated applications will have to clarify whether the system is beneficial in clinical routine.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 408
Author(s):  
Elicia L. S. Wong ◽  
Khuong Q. Vuong ◽  
Edith Chow

Nanozymes are advanced nanomaterials which mimic natural enzymes by exhibiting enzyme-like properties. As nanozymes offer better structural stability over their respective natural enzymes, they are ideal candidates for real-time and/or remote environmental pollutant monitoring and remediation. In this review, we classify nanozymes into four types depending on their enzyme-mimicking behaviour (active metal centre mimic, functional mimic, nanocomposite or 3D structural mimic) and offer mechanistic insights into the nature of their catalytic activity. Following this, we discuss the current environmental translation of nanozymes into a powerful sensing or remediation tool through inventive nano-architectural design of nanozymes and their transduction methodologies. Here, we focus on recent developments in nanozymes for the detection of heavy metal ions, pesticides and other organic pollutants, emphasising optical methods and a few electrochemical techniques. Strategies to remediate persistent organic pollutants such as pesticides, phenols, antibiotics and textile dyes are included. We conclude with a discussion on the practical deployment of these nanozymes in terms of their effectiveness, reusability, real-time in-field application, commercial production and regulatory considerations.


2021 ◽  
Vol 1 (1) ◽  
pp. 48-67
Author(s):  
Dylan Yamada-Rice

This article reports on one stage of a project that considered twenty 8–12-year-olds' use of Virtual Reality (VR) for entertainment. The entire project considered this in relation to interaction and engagement, health and safety, and how VR play fitted into children's everyday home lives. The specific focus of this article is solely on children's interaction and engagement with a range of VR content on both a low-end and a high-end head-mounted display (HMD). The data were analysed using novel multimodal methods, including stop-motion animation and graphic narratives, to develop multimodal means of analysis within the context of VR. The data highlighted core design elements in VR content that promoted or inhibited children's storytelling in virtual worlds. These are visual style, movement and sound, which are described in relation to three core points of the user's journey through the virtual story: (1) entering the virtual environment, (2) being in the virtual story world, and (3) affecting the story through interactive objects. The findings offer research-based design implications for the improvement of virtual content for children, specifically in relation to creating content that promotes creativity and storytelling, thereby extending the benefits that have previously been highlighted in the field of interactive storytelling with other digital media.

