A calibration method for optical see-through head-mounted displays with a depth camera

Author(s):  
Hanseul Jun ◽  
Gunhee Kim


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yea Som Lee ◽  
Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can view and navigate 3D models of the real world, are widely available in current mobile and desktop environments. Users typically rely on a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces reduce the level of immersion because their manipulation does not resemble the corresponding actions in reality, which often makes them inappropriate for navigating 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps that are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard- and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for achieving a high level of immersion and fun in HMD-based virtual environments.
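The abstract does not detail the recognition algorithm, but the simplest realization of real-time gesture capture with a Kinect is a threshold test on skeleton joints. The sketch below is an illustrative assumption rather than the authors' method: the joint names, coordinate convention, gesture-to-command mapping, and thresholds are all hypothetical.

```python
import numpy as np

# Illustrative thresholds (assumed): forward reach past the shoulder in
# metres, and hand separation as a multiple of shoulder width.
REACH_T = 0.35
SPREAD_T = 1.5

def detect_gesture(joints):
    """Map one Kinect skeleton frame to a coarse map-navigation command.

    `joints` maps joint names to np.array([x, y, z]) positions in camera
    space; z grows away from the sensor, so a hand pushed toward the
    Kinect has a smaller z than its shoulder.
    """
    l_reach = joints["shoulder_left"][2] - joints["hand_left"][2]
    r_reach = joints["shoulder_right"][2] - joints["hand_right"][2]
    spread = np.linalg.norm(joints["hand_left"] - joints["hand_right"])
    shoulder_w = np.linalg.norm(joints["shoulder_left"] - joints["shoulder_right"])

    if l_reach > REACH_T and r_reach > REACH_T:
        # Both hands extended: wide apart reads as a zoom, close as flying.
        return "zoom_in" if spread > SPREAD_T * shoulder_w else "move_forward"
    if r_reach > REACH_T or l_reach > REACH_T:
        return "rotate"   # one-handed reach interpreted as a turn
    return "idle"
```

Running such a classifier per skeleton frame (30 Hz on the Kinect) is what makes the interaction feel real-time; smoothing over a few frames would suppress jitter.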


1996 ◽  
Vol 5 (1) ◽  
pp. 122-135 ◽  
Author(s):  
Takashi Oishi ◽  
Susumu Tachi

See-through head-mounted displays (STHMDs), which superimpose a virtual environment generated by computer graphics (CG) on the real world, are expected to vividly display various simulations and designs by combining the real and virtual environments around us. However, because both environments are visible, the virtual environment must be superimposed exactly on the real one. Mismatches in location and size between real and virtual objects are likely to arise between the world coordinates of the real environment, where the STHMD user actually exists, and those of the virtual environment, defined by CG parameters. Such disagreement directly displaces the locations where virtual objects are superimposed, so the STHMD must be calibrated for the virtual environment to be superimposed properly. Among the causes of such errors, we focus on systematic errors in the projection transformation parameters introduced during manufacturing and on differences between the actual and assumed locations of the user's eye relative to the STHMD in use, and we propose a calibration method to eliminate these effects. In this method, a virtual cursor drawn in the virtual environment is fitted directly onto targets in the real environment. Based on the fitting results, the least-squares method identifies the parameter values that minimize the differences between the locations of the virtual cursor in the virtual environment and the targets in the real environment. After describing the calibration method, we also report the results of applying it to an STHMD that we built. The results are accurate enough to demonstrate the effectiveness of the calibration method.
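The least-squares step can be illustrated with the standard direct linear transform (DLT) used in later OST-HMD calibration work: each cursor-on-target alignment yields one 2D-3D correspondence, and a homogeneous least-squares solve recovers the projection parameters. This is a sketch of that generic formulation, not necessarily Oishi and Tachi's exact parameterization.

```python
import numpy as np

def estimate_projection(world_pts, screen_pts):
    """Estimate a 3x4 projection from cursor-target correspondences.

    world_pts:  (N, 3) target positions in world coordinates.
    screen_pts: (N, 2) display positions where the user aligned the cursor.
    Needs N >= 6 non-degenerate correspondences (12 unknowns, 11 DOF).
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        # Each correspondence contributes two linear constraints on P.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    # Homogeneous least squares: the right singular vector with the
    # smallest singular value minimizes ||A p|| subject to ||p|| = 1.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

The recovered matrix absorbs both the manufacturing-dependent projection parameters and the user-specific eye position, which is why per-user fitting against real targets corrects both error sources at once.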


Computers ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 58
Author(s):  
Dennis Reimer ◽  
Iana Podkosova ◽  
Daniel Scherzer ◽  
Hannes Kaufmann

In colocated multi-user Virtual Reality applications, relative user positions in the virtual environment need to match their relative positions in the physical tracking space. A mismatch between virtual and real relative user positions might lead to harmful events such as physical user collisions. This paper examines three calibration methods that enable colocated Virtual Reality scenarios for SLAM-tracked head-mounted displays without the need for an external tracking system. Two of these methods, fixed-point calibration and marker-based calibration, have been described in previous research; the third method, which uses the hand-tracking capabilities of head-mounted displays, is novel. We evaluated the accuracy of these three methods in an experimental procedure with two colocated Oculus Quest devices. The results of the evaluation show that our novel hand-tracking-based calibration method provides better accuracy and consistency while at the same time being easy to execute. The paper further discusses the potential of all evaluated calibration methods.
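All three calibration methods ultimately reduce to estimating the rigid transform between two headsets' SLAM frames from corresponding points (fixed reference points, marker poses, or tracked hand positions). Below is a minimal sketch of that shared core using the Kabsch/Umeyama SVD solution; it assumes the data-collection step has already produced matched point sets, which is where the three methods actually differ.

```python
import numpy as np

def rigid_align(pts_a, pts_b):
    """Find R (3x3) and t (3,) minimizing sum ||R @ a_i + t - b_i||^2.

    pts_a, pts_b: (N, 3) corresponding points, e.g. the same physical
    positions expressed in each headset's own SLAM coordinate frame.
    """
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Applying the resulting transform to one headset's poses places both users in a common frame; residual alignment error translates directly into the virtual-versus-physical position mismatch the paper measures.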


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3885
Author(s):  
Jaeho Lee ◽  
Hyunsoo Shin ◽  
Sungon Lee

In a 3D scanning system using a camera and a line laser, obtaining the exact geometric relationship between the camera and the laser is critical for precise 3D reconstruction. With existing depth cameras, it is difficult to scan a large object or multiple objects over a wide area because only a limited area can be scanned at a time. We developed a 3D scanning system with a rotating line laser and a wide-angle camera for large-area reconstruction. To recover 3D information of an object with a rotating line laser, the plane of the line laser must be known in camera coordinates at every rotation angle. This is done by estimating the rotation axis during calibration and then rotating the laser by a predefined angle. Accurate calibration is therefore crucial for 3D reconstruction. In this study, we propose a calibration method to estimate the geometric relationship between the rotation axis of the line laser and the camera. Using the proposed method, we could accurately estimate the center of the cone or cylinder shape swept out by the rotating line laser. A simulation study was conducted to evaluate the accuracy of the calibration. In the experiment, we compared 3D reconstructions from our system and from a commercial depth camera. The results show that the precision of our system is approximately 65% higher for plane reconstruction, and the scanning quality is also much better than that of the depth camera.
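Two geometric steps recur in this kind of system: fitting the laser plane to triangulated 3D points, and predicting the plane after the laser turns by a known angle about the calibrated axis. The sketch below shows both under simplifying assumptions (points on the plane are already available in camera coordinates; the axis has already been estimated); it is not the authors' full procedure.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points: (unit normal, centroid)."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return Vt[-1], c  # direction of smallest variance is the normal

def rotate_about_axis(v, k, theta):
    """Rodrigues' rotation of vector v about a unit axis direction k."""
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1 - np.cos(theta)))

def plane_at_angle(normal, point, axis_dir, axis_pt, theta):
    """Laser plane after rotating theta radians about the axis line.

    axis_dir is the (unit) rotation-axis direction and axis_pt a point on
    it, both in camera coordinates as produced by the calibration.
    """
    n = rotate_about_axis(normal, axis_dir, theta)
    p = axis_pt + rotate_about_axis(point - axis_pt, axis_dir, theta)
    return n, p
```

With the plane known at each angle, every imaged laser pixel can be back-projected as a camera ray and intersected with that plane, which is why errors in the estimated axis propagate directly into the reconstruction.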


2011 ◽  
Vol 199 (2) ◽  
pp. 328-335 ◽  
Author(s):  
Stuart J. Gilson ◽  
Andrew W. Fitzgibbon ◽  
Andrew Glennerster
