Hybrid hand tracking system

Author(s):  
Jing-Ming Guo ◽  
Hoang-Son Nguyen
2021 ◽

Author(s):  
Tianyun Yuan ◽  
Yu (Wolf) Song ◽  
Gerald A. Kraan ◽  
Richard H. M. Goossens

Abstract Measuring the motion of human hand joints is a challenging task due to the high number of degrees of freedom (DOFs). In this study, we proposed a low-cost hand tracking system built on action cameras and ArUco markers to measure finger joint rotation angles. The lens distortion of each camera was first corrected via intra-calibration, and the videos of the different cameras were aligned to a reference camera using a method based on dynamic time warping. Two methods were proposed and implemented for extracting the rotation angles of finger joints: one based on the 3D positions of the markers obtained via inter-calibration between cameras, named the pos-based method; the other based on the relative marker orientation information from individual cameras, named the rot-based method. An experiment was conducted to evaluate the effectiveness of the proposed system. The right hand of a volunteer was included in this practical study, where the movement of the fingers was recorded and the finger rotation angles were calculated with each of the two proposed methods. The results indicated that although the rot-based method may collect less data than the pos-based method, it was more stable and reliable. Therefore, the rot-based method is recommended for measuring finger joint rotation in practical setups.
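The dynamic-time-warping alignment step mentioned above can be sketched as follows. This is a generic textbook DTW over 1-D per-frame signals (e.g. motion magnitudes), with illustrative names; it is not the authors' implementation:

```python
def dtw_path(ref, sig):
    """Classic dynamic time warping: returns the optimal alignment path
    between two 1-D sequences (e.g. per-frame motion magnitudes of a
    reference camera and another camera)."""
    n, m = len(ref), len(sig)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost aligning ref[:i] with sig[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - sig[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1],  # match
                                 cost[i - 1][j],      # insertion
                                 cost[i][j - 1])      # deletion
    # backtrack to recover the frame-to-frame correspondence
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return path[::-1]
```

The returned path pairs each frame index of the second video with a frame index of the reference video, so the two streams can be resampled onto a common timeline.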


2021 ◽  
pp. 305-315
Author(s):  
Giuseppe Placidi ◽  
Giovanni De Gasperis ◽  
Filippo Mignosi ◽  
Matteo Polsinelli ◽  
Matteo Spezialetti

2021 ◽  
Vol 5 (11) ◽  
pp. 66
Author(s):  
Michael Chan ◽  
Alvaro Uribe-Quevedo ◽  
Bill Kapralos ◽  
Michael Jenkin ◽  
Norman Jaimes ◽  
...  

Direct ophthalmoscopy (DO) is a medical procedure whereby a health professional, using a direct ophthalmoscope, examines the eye fundus. DO skills are in decline due to the use of interactive diagnostic equipment and insufficient practice with the direct ophthalmoscope. To address the loss of DO skills, physical and computer-based simulators have been developed to offer additional training. Among computer-based simulations, virtual and augmented reality (VR and AR, respectively) allow simulating immersive and interactive scenarios with eye fundus conditions that are difficult to replicate in the classroom. VR and AR require 3D user interfaces (3DUIs) to perform the virtual eye examination. Using a combination of between-subjects and within-subjects paradigms with two groups of five participants, this paper builds upon a previous preliminary usability study that compared the HTC Vive controller, the Valve Index controller, and Microsoft HoloLens 1 hand gesticulation as interaction methods for performing a virtual direct ophthalmoscopy eye examination. The work described in this paper extends our prior work by considering interactions with the Oculus Quest controller and the Oculus Quest hand-tracking system to perform a virtual direct ophthalmoscopy eye examination, allowing us to compare these methods against our prior interaction techniques. Ultimately, this helps us develop a greater understanding of usability effects for virtual DO examinations and virtual reality in general.
Although the number of participants was limited given the COVID-19 restrictions (n = 5 for Stage 1, which included the HTC Vive controller, the Valve Index controller, and the Microsoft HoloLens hand gesticulations; n = 13 for Stage 2, which included the Oculus Quest controller and the Oculus Quest hand tracking), our initial results comparing VR and AR 3D user interactions for direct ophthalmoscopy are consistent with our previous preliminary study: the physical controllers resulted in higher usability scores, while the Oculus Quest's more accurate hand motion capture resulted in higher usability than the Microsoft HoloLens hand gesticulation.
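Usability scores of the kind reported above are commonly collected with the System Usability Scale (SUS); assuming a SUS-style questionnaire (an assumption, since the abstract does not name the instrument), the standard scoring can be sketched as:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribute response - 1),
    even-numbered items are negatively worded (contribute 5 - response);
    the sum is scaled to a 0-100 range.
    """
    assert len(responses) == 10, "SUS uses exactly ten items"
    total = 0
    for idx, r in enumerate(responses, start=1):
        total += (r - 1) if idx % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, a fully positive response pattern (5 on odd items, 1 on even items) yields the maximum score of 100.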


Author(s):  
Tianyun Yuan ◽  
Yu Song ◽  
Gerald A. Kraan ◽  
Richard HM Goossens

Abstract Measuring the motions of human hand joints is often a challenge due to the high number of degrees of freedom. In this study, we proposed a hand tracking system utilizing action cameras and ArUco markers to continuously measure the rotation angles of hand joints. Three methods were developed to estimate the joint rotation angles. The pos-based method transforms marker positions to a reference coordinate system (RCS) and extracts a hand skeleton to identify the rotation angles. Similarly, the orient-x-based method calculates the rotation angles from the transformed x-orientations of the detected markers in the RCS. In contrast, the orient-mat-based method first identifies the rotation angles in each camera coordinate system using the detected orientations and then synthesizes the results for each joint. Experimental results indicated that the repeatability errors with one camera ranged from 2.64 to 27.56 degrees using the marker positions and from 0.60 to 2.36 degrees using the marker orientations, depending on the marker size. When multiple cameras were employed to measure the joint rotation angles, the angles measured by the three methods were comparable with those measured by a goniometer, although larger deviations occurred when using the pos-based method. Further analysis indicated that the orient-mat-based method can describe more types of joint rotation, and its effectiveness was verified by capturing the hand movements of several participants. Thus, it is recommended for measuring joint rotation angles in practical setups.


Author(s):  
Serdar Tumkor ◽  
Sven K. Esche ◽  
Constantin Chassapis

When designing products, networked computers are increasingly used to facilitate collaboration among team members in remote locations. Design visualization plays a critical role in understanding the design concepts shared by the design team members. CAD systems have 3D visualization capabilities that are designed to help users understand complex structures easily and design better products. However, 3D visualization on a 2D screen has limitations in communicating complex structures. Furthermore, gestures play a significant role in face-to-face communication but are missing in remote collaboration. Manipulating objects in 3D using gestures in real physical space, without a cumbersome hand-held peripheral device, may ease the visualization and understanding of the concept model. Today, peripheral devices for human-computer interfaces are not only becoming more advanced but also less expensive. Some game controllers are now equipped with depth (RGB-D) cameras and have great potential for complementing or even replacing the traditional keyboard-and-mouse interface with hand or body gestures. In this paper, new low-cost mixed reality devices for 3D user inputs and visual outputs are investigated, and their possible uses with CAD systems in the process of collaborative design are discussed. In this study, a hand tracking system built on two Kinect sensors was used to track the position and orientation of the user's hands. In this system, CAD models can be manipulated and disassembled using hand gestures. The new user interface provides user-friendly interaction between the designer and the CAD system.
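One common way to drive such gesture-based manipulation is to apply the hand's frame-to-frame rigid motion directly to the model's pose. The sketch below (names and the 4x4 homogeneous-transform convention are ours, not the authors' implementation) shows the core update:

```python
import numpy as np

def manipulate(model_pose, hand_prev, hand_now):
    """Apply the frame-to-frame hand motion to the model's pose.

    model_pose, hand_prev, hand_now: 4x4 homogeneous transforms in the
    world frame. The model is moved by the same rigid motion the hand
    performed between the two tracked frames.
    """
    delta = hand_now @ np.linalg.inv(hand_prev)  # hand motion in world frame
    return delta @ model_pose
```

Calling this once per tracking frame while a "grab" gesture is active makes the model follow the hand; releasing the gesture simply stops the updates.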


Computers ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 58
Author(s):  
Dennis Reimer ◽  
Iana Podkosova ◽  
Daniel Scherzer ◽  
Hannes Kaufmann

In colocated multi-user Virtual Reality applications, relative user positions in the virtual environment need to match their relative positions in the physical tracking space. A mismatch between virtual and real relative user positions might lead to harmful events such as physical user collisions. This paper examines three calibration methods that enable colocated Virtual Reality scenarios for SLAM-tracked head-mounted displays without the need for an external tracking system. Two of these methods, fixed-point calibration and marker-based calibration, have been described in previous research; the third method, which uses the hand tracking capabilities of head-mounted displays, is novel. We evaluated the accuracy of these three methods in an experimental procedure with two colocated Oculus Quest devices. The results of the evaluation show that our novel hand tracking-based calibration method provides better accuracy and consistency while at the same time being easy to execute. The paper further discusses the potential of all evaluated calibration methods.
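A hand-tracking-based calibration of this kind reduces to estimating the rigid transform between the two headsets' SLAM coordinate systems from corresponding 3D points (e.g. hand positions observed by both devices). A standard least-squares solution is the Kabsch algorithm; the sketch below is a generic illustration under that assumption, not the paper's implementation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q ≈ R @ P + t.

    P, Q: 3xN arrays of corresponding points expressed in the two
    coordinate systems (Kabsch algorithm via SVD).
    """
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With the transform estimated once, every pose reported in one headset's coordinate system can be mapped into the other's, which keeps the users' relative virtual positions consistent with their physical ones.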


2014 ◽  
Vol 2 (5) ◽  
pp. 238-248
Author(s):  
Mohd Shahrimie Mohd Asaari ◽  
Shahrel Azmin Suandi ◽  
Bakhtiar Affendi Rosdi

Author(s):  
N. Pretto ◽  
F. Poiesi

We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects, and a single-hand joypad to move in the virtual environment. We use a Google Cardboard as the HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
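A gesture-based pointer of the kind mentioned above can be implemented by casting a ray from the palm through a fingertip and intersecting it with scene geometry. The plane-intersection sketch below uses illustrative names and is not the authors' implementation:

```python
import numpy as np

def pointer_hit(palm, fingertip, plane_point, plane_normal):
    """Intersect a pointing ray (palm -> fingertip) with a scene plane.

    Returns the 3D hit point, or None when the ray is parallel to the
    plane or points away from it.
    """
    d = fingertip - palm
    d = d / np.linalg.norm(d)                 # unit ray direction
    denom = d @ plane_normal
    if abs(denom) < 1e-9:
        return None                           # ray parallel to the plane
    s = ((plane_point - palm) @ plane_normal) / denom
    return palm + s * d if s >= 0.0 else None
```

A virtual ruler can then be built on top of the same primitive by taking the Euclidean distance between two such hit points.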

