Microsoft HoloLens
Recently Published Documents


TOTAL DOCUMENTS

173
(FIVE YEARS 120)

H-INDEX

11
(FIVE YEARS 4)

2022 ◽  
Author(s):  
Daniar Estu Widiyanti ◽  
Krisma Asmoro ◽  
Soo Young Shin

A ground control station (GCS) is a system for controlling and monitoring an unmanned aerial vehicle (UAV). In current GCSs, the devices involved form a complex operating environment. This paper proposes video streaming and speech command control to support a mixed reality-based UAV GCS using the Microsoft HoloLens. Video streaming provides the UAV's view by transmitting the raw video to the HoloLens, while the HoloLens steers the UAV based on the displayed UAV field of view (FoV). Using the HoloLens Mixed Reality Toolkit (MRTK) speech input, UAV speech control from the HoloLens was successfully implemented. Finally, experimental results covering throughput, round-trip time, latency, and speech accuracy are discussed to demonstrate the feasibility of the proposed scheme.
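
As a rough illustration of the speech-command path described in this abstract, the hypothetical Python sketch below maps recognized keyword phrases (such as those an MRTK keyword recognizer might deliver) to simple UAV commands sent over UDP. The command vocabulary, JSON message format, and bridge endpoint are assumptions for illustration only, not the authors' implementation.

```python
import json
import socket

# Illustrative only: the paper uses MRTK speech input on the HoloLens; the
# command vocabulary, message format, and UDP bridge below are assumptions.
COMMAND_MAP = {
    "take off":     {"type": "takeoff"},
    "land":         {"type": "land"},
    "move forward": {"type": "velocity", "vx": 1.0, "vy": 0.0, "vz": 0.0},
    "move back":    {"type": "velocity", "vx": -1.0, "vy": 0.0, "vz": 0.0},
    "move left":    {"type": "velocity", "vx": 0.0, "vy": -1.0, "vz": 0.0},
    "move right":   {"type": "velocity", "vx": 0.0, "vy": 1.0, "vz": 0.0},
    "stop":         {"type": "velocity", "vx": 0.0, "vy": 0.0, "vz": 0.0},
}

GCS_BRIDGE = ("127.0.0.1", 14550)  # placeholder address of a UAV/GCS bridge


def on_speech_command(keyword: str, sock: socket.socket) -> bool:
    """Forward a recognized speech keyword to the UAV as a JSON command."""
    command = COMMAND_MAP.get(keyword.lower())
    if command is None:
        return False  # unrecognized phrase: ignore rather than act
    sock.sendto(json.dumps(command).encode("utf-8"), GCS_BRIDGE)
    return True


if __name__ == "__main__":
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Simulate keywords arriving from the speech recognizer.
    for phrase in ["take off", "move forward", "stop", "land"]:
        on_speech_command(phrase, udp_sock)
```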


Technologies ◽  
2022 ◽  
Vol 10 (1) ◽  
pp. 4
Author(s):  
Stephanie Arévalo Arboleda ◽  
Marvin Becker ◽  
Jens Gerken

Hands-free robot teleoperation and augmented reality have the potential to create an inclusive environment for people with motor disabilities, allowing them to teleoperate robotic arms to manipulate objects. However, the experiences evoked by the same teleoperation concept and augmented reality can vary significantly for people with motor disabilities compared to those without disabilities. In this paper, we report the experiences of Miss L., a person with multiple sclerosis, when teleoperating a robotic arm in a hands-free, multimodal manner using a virtual menu and visual hints presented through the Microsoft HoloLens 2. We discuss our findings and compare her experiences to those of people without disabilities using the same teleoperation concept. Additionally, we present three learning points from comparing these experiences: re-evaluating the metrics used to measure performance, being aware of bias, and considering variability in abilities, which evokes different experiences. We believe these learning points can be extrapolated to carrying out human–robot interaction evaluations with mixed groups of participants with and without disabilities.


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 256
Author(s):  
Sebastian Kapp ◽  
Frederik Lauer ◽  
Fabian Beil ◽  
Carl C. Rheinländer ◽  
Norbert Wehn ◽  
...  

With the recent increase in the use of augmented reality (AR) in educational laboratory settings, there is a need for new intelligent sensor systems capturing all aspects of the real environment. We present a smart sensor system meeting these requirements for STEM (science, technology, engineering, and mathematics) experiments with electrical circuits. The system consists of custom experiment boxes and cables combined with an application for the Microsoft HoloLens 2, which creates an AR experiment environment. The boxes combine sensors for measuring the electrical voltage and current at the integrated electrical components with a reconstruction of the currently constructed electrical circuit and the position of each sensor box on the table. Combining these data, the AR application visualizes the measurement data spatially and temporally coherently with the real experiment boxes, thus fulfilling demands derived from traditional multimedia learning theory. Following an evaluation of the accuracy and precision of the presented sensors, the usability of the system was evaluated with n = 20 pupils at a German high school, who rated it with a system usability score of 94 out of 100.
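
A minimal sketch of what one such experiment box could stream to the AR application is shown below: timestamped voltage and current samples tagged with a box identifier and sent as JSON over UDP. The field names, sample rate, and transport are illustrative assumptions, not the authors' protocol.

```python
import json
import random
import socket
import time

# Hypothetical sketch of one smart experiment box: it samples voltage and
# current at its integrated component and streams the readings to the AR
# application. Field names, rate, and transport are assumptions.
AR_CLIENT = ("127.0.0.1", 5005)  # placeholder address of the HoloLens app
BOX_ID = "box-03"
SAMPLE_RATE_HZ = 10


def read_sensors():
    """Stand-in for real ADC readings of voltage (V) and current (A)."""
    return 4.98 + random.uniform(-0.02, 0.02), 0.012 + random.uniform(-0.001, 0.001)


def stream_measurements(duration_s: float = 3.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    end = time.time() + duration_s
    while time.time() < end:
        voltage, current = read_sensors()
        packet = {
            "box_id": BOX_ID,
            "timestamp": time.time(),
            "voltage_v": round(voltage, 4),
            "current_a": round(current, 5),
        }
        sock.sendto(json.dumps(packet).encode("utf-8"), AR_CLIENT)
        time.sleep(1.0 / SAMPLE_RATE_HZ)


if __name__ == "__main__":
    stream_measurements()
```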


2021 ◽  
Vol 26 (2) ◽  
pp. 87-116
Author(s):  
Ang Kouch Keang ◽  
Kriengsak Panuwatwanich ◽  
Pakawat Sancharoen ◽  
Somporn Sahachaisaree

This research introduces an augmented reality (AR)-based approach for the onsite inspection of expressway structures by developing an AR application, named HoloSpector, deployed on the first-generation Microsoft HoloLens headset. It was tested by a focus group of 10 postgraduate students, followed by three inspectors from the Expressway Authority of Thailand (EXAT), to investigate its practical applicability. A questionnaire was employed to measure and assess the application and the HoloLens. The results indicated that the developed digital approach was satisfactory, easy to use and learn, useful, user-friendly, and practical for EXAT expressway inspections, and users expressed an intention to adopt it. Compared with the conventional approach, data communication and management could be significantly improved; this digital approach has the potential to save resources, time, and cost and to increase work productivity.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Nicole Wake ◽  
Andrew B. Rosenkrantz ◽  
William C. Huang ◽  
James S. Wysock ◽  
Samir S. Taneja ◽  
...  

Augmented reality (AR) and virtual reality (VR) are burgeoning technologies that have the potential to greatly enhance patient care. Visualizing patient-specific three-dimensional (3D) imaging data in these enhanced virtual environments may improve surgeons’ understanding of anatomy and surgical pathology, thereby allowing for improved surgical planning, superior intra-operative guidance, and ultimately improved patient care. It is important that radiologists are familiar with these technologies, especially since the number of institutions utilizing VR and AR is increasing. This article gives an overview of AR and VR and describes the workflow required to create anatomical 3D models for use in AR using the Microsoft HoloLens device. Case examples in urologic oncology (prostate cancer and renal cancer) are provided, which depict how AR has been used to guide surgery at our institution.
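
One typical step in such a workflow is converting a binary segmentation volume into a surface mesh that an AR application can load. The sketch below, which assumes scikit-image's marching cubes and a plain Wavefront OBJ export, illustrates that step only; the input mask, voxel spacing, and output format are placeholders rather than the authors' pipeline.

```python
import numpy as np
from skimage import measure

# Illustrative sketch: convert a binary segmentation volume (e.g., a segmented
# kidney or prostate) into a surface mesh a HoloLens app could load. The input
# array, voxel spacing, and output path are placeholders.

def segmentation_to_obj(mask: np.ndarray, spacing=(1.0, 1.0, 1.0),
                        out_path: str = "model.obj") -> None:
    """Extract an isosurface from a binary mask and write it as a Wavefront OBJ."""
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing
    )
    with open(out_path, "w") as f:
        for vx, vy, vz in verts:
            f.write(f"v {vx:.4f} {vy:.4f} {vz:.4f}\n")
        for a, b, c in faces + 1:  # OBJ face indices are 1-based
            f.write(f"f {a} {b} {c}\n")


if __name__ == "__main__":
    # Toy example: a solid sphere as a stand-in for a segmented organ.
    grid = np.indices((64, 64, 64)).astype(np.float32)
    sphere = ((grid - 32.0) ** 2).sum(axis=0) < 20.0 ** 2
    segmentation_to_obj(sphere, spacing=(0.5, 0.5, 0.5))
```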


2021 ◽  
Vol 5 (11) ◽  
pp. 66
Author(s):  
Michael Chan ◽  
Alvaro Uribe-Quevedo ◽  
Bill Kapralos ◽  
Michael Jenkin ◽  
Norman Jaimes ◽  
...  

Direct ophthalmoscopy (DO) is a medical procedure whereby a health professional, using a direct ophthalmoscope, examines the eye fundus. DO skills are in decline due to the use of interactive diagnostic equipment and insufficient practice with the direct ophthalmoscope. To address this loss of skill, physical and computer-based simulators have been developed to offer additional training. Among the computer-based simulations, virtual and augmented reality (VR and AR, respectively) allow simulated immersive and interactive scenarios with eye fundus conditions that are difficult to replicate in the classroom. VR and AR require 3D user interfaces (3DUIs) to perform the virtual eye examination. Using a combination of between-subjects and within-subjects paradigms with two groups of five participants, this paper builds upon a previous preliminary usability study that compared the HTC Vive controller, the Valve Index controller, and Microsoft HoloLens 1 hand gesticulation as interaction methods for performing a virtual direct ophthalmoscopy eye examination. The work described in this paper extends our prior work by considering interactions with the Oculus Quest controller and the Oculus Quest hand-tracking system, allowing us to compare these methods with our prior interaction techniques. Ultimately, this helps us develop a greater understanding of usability effects for virtual DO examinations and virtual reality in general. Although the number of participants was limited given the COVID-19 restrictions (n = 5 for Stage 1, covering the HTC Vive controller, the Valve Index controller, and Microsoft HoloLens hand gesticulation, and n = 13 for Stage 2, covering the Oculus Quest controller and Oculus Quest hand tracking), our initial results comparing VR and AR 3D user interactions for direct ophthalmoscopy are consistent with our previous preliminary study: the physical controllers resulted in higher usability scores, while the Oculus Quest’s more accurate hand motion capture resulted in higher usability compared to Microsoft HoloLens hand gesticulation.


Author(s):  
Martin Weinmann ◽  
Sven Wursthorn ◽  
Michael Weinmann ◽  
Patrick Hübner

The Microsoft HoloLens is a head-worn mobile augmented reality device. It allows real-time 3D mapping of its direct environment and self-localisation within the acquired 3D data. Both aspects are essential for robustly augmenting the local environment around the user with virtual contents and for robust interaction of the user with virtual objects. Although not primarily designed as an indoor mapping device, the Microsoft HoloLens has high potential for efficient and comfortable mapping of both room-scale and building-scale indoor environments. In this paper, we provide a survey of the capabilities of the Microsoft HoloLens (Version 1) for the efficient 3D mapping and modelling of indoor scenes. More specifically, we focus on its capabilities regarding localisation (in terms of pose estimation) within indoor environments and the spatial mapping of indoor environments. While the Microsoft HoloLens certainly cannot compete with laser scanners in providing highly accurate 3D data, we demonstrate that the acquired data provides sufficient accuracy for a subsequent standard rule-based reconstruction of a semantically enriched and topologically correct model of an indoor scene. Furthermore, we discuss the robustness of standard handcrafted geometric features extracted from data acquired with the Microsoft HoloLens and typically used for subsequent learning-based semantic segmentation.
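
The handcrafted geometric features referred to above are typically eigenvalue-based descriptors of local point neighbourhoods. The sketch below shows one common variant (linearity, planarity, and sphericity derived from the 3D covariance of a neighbourhood); neighbourhood selection and the exact feature set used in the paper are not reproduced here.

```python
import numpy as np

# Minimal sketch of standard eigenvalue-based geometric features (linearity,
# planarity, sphericity) for a local neighbourhood of 3D points, as commonly
# used for learning-based semantic segmentation of point clouds. Neighbourhood
# selection (e.g., k nearest neighbours) is omitted for brevity.

def eigen_features(neighbourhood: np.ndarray) -> dict:
    """Compute shape features from the 3D covariance of an (N, 3) point set."""
    centered = neighbourhood - neighbourhood.mean(axis=0)
    cov = centered.T @ centered / len(neighbourhood)
    # Eigenvalues sorted in descending order: l1 >= l2 >= l3 >= 0.
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A noisy planar patch should score high on planarity.
    patch = rng.uniform(-1.0, 1.0, size=(200, 3)) * np.array([1.0, 1.0, 0.01])
    print(eigen_features(patch))
```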


2021 ◽  
Vol 11 (20) ◽  
pp. 9480
Author(s):  
Xinyi Tu ◽  
Juuso Autiosalo ◽  
Adnane Jadid ◽  
Kari Tammi ◽  
Gudrun Klinker

Digital twin technology empowers the digital transformation of the industrial world with an increasing amount of data, which at the same time creates a challenging context for designing a human–machine interface (HMI) for operating machines. This work aims at creating an HMI for digital twin-based services. With an industrial crane platform as a case study, we present a mixed reality (MR) application running on a Microsoft HoloLens 1 device. The application, consisting of visualization, interaction, communication, and registration modules, allows crane operators to both monitor the crane status and control its movement through interactive holograms and bi-directional data communication, with enhanced mobility thanks to spatial registration and tracking of the MR environment. The prototype was quantitatively evaluated with respect to control accuracy in 20 measurements, following a step-by-step protocol that we defined to standardize the measurement procedure. The results suggest that the differences between the target and actual positions were within 10 cm in all three dimensions, which is considered sufficiently small for the typical logistics use case of crane operation and could be further improved by adopting more robust registration and tracking techniques in future work.
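
As a sketch of the kind of accuracy check reported above, the snippet below compares commanded target positions with measured crane positions over repeated trials and tests the per-axis error against a 10 cm tolerance. The sample values are simulated placeholders, not the paper's measurement data.

```python
import numpy as np

# Illustrative evaluation sketch: per-axis differences between target and
# actual crane positions over repeated measurements, checked against a 10 cm
# tolerance. The generated values below are placeholders, not measured data.
TOLERANCE_M = 0.10


def accuracy_report(targets: np.ndarray, actuals: np.ndarray) -> dict:
    """Summarize per-axis position error for (N, 3) target/actual pairs in metres."""
    errors = np.abs(actuals - targets)
    return {
        "mean_error_m": errors.mean(axis=0),
        "max_error_m": errors.max(axis=0),
        "within_tolerance": bool((errors <= TOLERANCE_M).all()),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    targets = rng.uniform(0.0, 2.0, size=(20, 3))             # commanded positions (x, y, z)
    actuals = targets + rng.normal(0.0, 0.03, size=(20, 3))   # simulated tracking noise
    print(accuracy_report(targets, actuals))
```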

