hand tracking
Recently Published Documents


TOTAL DOCUMENTS

552
(FIVE YEARS 136)

H-INDEX

35
(FIVE YEARS 2)

Author(s):  
Chandan Kumar

Abstract: Computer vision is the field concerned with how images and videos are stored, manipulated, and analysed, and with retrieving information from them; it is a branch of Artificial Intelligence. Computer vision plays a major role in autonomous cars, object detection, robotics, object tracking, and related applications. OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library, built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. It ships with a much-improved deep learning (dnn) module that supports a number of deep learning frameworks, including Caffe, TensorFlow, and Torch/PyTorch, allowing models trained with dedicated deep learning libraries and tools to be used directly inside OpenCV scripts. MediaPipe is a framework mainly used for building pipelines over audio, video, or other time-series data. With the MediaPipe framework, we can build impressive pipelines for media processing functions such as multi-hand tracking, face detection, and object detection and tracking.
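As a concrete illustration of such a pipeline, here is a minimal sketch combining OpenCV video capture with the MediaPipe Hands solution; the webcam index, confidence threshold, and Esc-to-quit handling are illustrative assumptions, not details from the paper.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam (assumed)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Multi-hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```

Each detected hand yields 21 landmarks in normalized image coordinates, which downstream code can map to gestures or controls.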


Author(s):  
Tianyun Yuan ◽  
Yu Song ◽  
Gerald A. Kraan ◽  
Richard HM Goossens

Abstract: Measuring the motions of human hand joints is often a challenge due to the high number of degrees of freedom. In this study, we proposed a hand tracking system that uses action cameras and ArUco markers to continuously measure the rotation angles of hand joints. Three methods were developed to estimate the joint rotation angles. The pos-based method transforms marker positions into a reference coordinate system (RCS) and extracts a hand skeleton to identify the rotation angles. Similarly, the orient-x-based method calculates the rotation angles from the transformed x-orientations of the detected markers in the RCS. In contrast, the orient-mat-based method first identifies the rotation angles in each camera coordinate system using the detected orientations, and then synthesizes the results for each joint. Experimental results indicated that the repeatability errors with one camera, across different marker sizes, were around 2.64 to 27.56 degrees using marker positions and 0.60 to 2.36 degrees using marker orientations. When multiple cameras were employed, the joint rotation angles measured by the three methods were comparable with those measured by a goniometer, although larger deviations occurred with the pos-based method. Further analysis indicated that the orient-mat-based method can describe more types of joint rotation, and its effectiveness was verified by capturing the hand movements of several participants; it is therefore recommended for measuring joint rotation angles in practical setups.
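The authors' implementation is not shown here, but the core of the orientation-based idea, comparing the detected orientations of two markers within one camera's coordinate system, can be sketched with OpenCV's ArUco module (4.7+ API). The intrinsics, marker size, dictionary, and marker IDs below are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from camera calibration.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
MARKER_LEN = 0.010  # marker side length in metres (assumed)

# 3D corners of a marker centred on its own origin, in ArUco corner order.
obj_pts = np.array(
    [[-MARKER_LEN / 2,  MARKER_LEN / 2, 0], [ MARKER_LEN / 2,  MARKER_LEN / 2, 0],
     [ MARKER_LEN / 2, -MARKER_LEN / 2, 0], [-MARKER_LEN / 2, -MARKER_LEN / 2, 0]],
    dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def relative_joint_angle(frame, id_proximal, id_distal):
    """Angle (degrees) between two markers' orientations in one camera view."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    rots = {}
    for marker_id, c in zip(ids.flatten(), corners):
        ok, rvec, _ = cv2.solvePnP(obj_pts, c.reshape(4, 2), K, dist)
        if ok:
            rots[int(marker_id)], _ = cv2.Rodrigues(rvec)
    if id_proximal not in rots or id_distal not in rots:
        return None
    # Rotation of the distal marker relative to the proximal one,
    # with the joint angle recovered from the trace of that rotation.
    R_rel = rots[id_proximal].T @ rots[id_distal]
    return float(np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1) / 2, -1.0, 1.0))))
```

Because the relative rotation cancels the camera's own orientation, angles computed this way can be combined across cameras per joint, mirroring how the orient-mat-based method synthesizes multi-camera results.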


2021 ◽  
Vol 9 (12) ◽  
pp. 227-231
Author(s):  
Deepak Tripathi ◽  
Aashi Srivastava

This review paper covers extensive research on the production of holograms through laser-plasma interaction. The concept involves producing plasma trails with femtosecond laser pulses and capturing them with a charge-coupled device. The paper also examines the procedure and principle of touchable holography: it emphasises the tactile display, the primary requirement for touchable holography, and discusses hand tracking and its applications.


Author(s):  
Hoa Tat Thang

Computers have become ubiquitous in recent years, and the forms of human-computer interaction are increasingly diverse. In many cases, controlling a computer is not limited to the mouse and keyboard; humans may need to control it through body language and gestures. For some people with physical disabilities, controlling the computer through hand movements is essential to interacting with it, and the field of simulation also needs such interactive applications. This paper studies a solution for building a hand tracking and gesture recognition system that allows cursor movement and the corresponding mouse and keyboard actions. Through implementation and evaluation, the research team confirms that the system works stably and accurately and can control the computer in place of a conventional mouse and keyboard.
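The paper's own code is not reproduced here, but a minimal sketch of the idea, driving the OS cursor from a tracked fingertip and using a pinch as a click, could look like the following. MediaPipe supplies the hand landmarks; pyautogui is one assumed choice of OS input library, and the 0.05 pinch threshold is an arbitrary illustrative value.

```python
import math

import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)  # mirror so movement feels natural
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            tip = lm[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            thumb = lm[mp_hands.HandLandmark.THUMB_TIP]
            # Map the normalized fingertip position to screen pixels.
            pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
            # Treat a thumb-index pinch as a left click; a real system
            # would debounce this so one pinch fires one click.
            if math.hypot(tip.x - thumb.x, tip.y - thumb.y) < 0.05:
                pyautogui.click()
        cv2.imshow("Hand mouse", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```

A full system along the paper's lines would add cursor smoothing and distinct gestures for right-click, drag, and keyboard actions.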


2021 ◽  
Author(s):  
Byron Mallett

This thesis presents the design for a method of controlling music software for live performance by utilising virtual reality (VR) technologies. By analysing the performance methods of artists who use either physical or gestural methods for controlling music, it is apparent that the physical limitations of musical input devices can hamper the creative process involved in authoring an interface for a performance. This thesis proposes the use of VR technologies as the foundation for a unique workspace in which a performance interface can be both constructed and performed with. Through a number of design experiments using a variety of gestural input technologies, the relationship between a musical performer, an interface, and an audience was analysed. The final proposed design of a VR interface for musical performance focuses on providing the performer with objects that can be directly manipulated through physical gestures, performed by touching virtual controls. By utilising the strengths of VR, a performer can learn to operate their performance environment effectively through the spatial awareness provided by stereoscopic rendering and hand tracking, while constructing unique interfaces that are not limited by physical hardware constraints. This thesis also presents a software framework for connecting multiple musical devices into a single performance ecosystem that can be directly controlled from a single VR space. The final outcome of this research is a shared musical environment designed to draw the audience, the performer, and the performance interface together into a coherent and appealing experience for all.
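The thesis framework itself is not reproduced here, but the routing pattern it describes, one VR control surface fanning out to several musical devices, is commonly realised over OSC. The sketch below uses the python-osc package with hypothetical device endpoints and message paths.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical endpoints for the devices in the performance ecosystem.
devices = {
    "synth":   SimpleUDPClient("127.0.0.1", 9000),
    "sampler": SimpleUDPClient("127.0.0.1", 9001),
}

def on_virtual_control(device_name, address, value):
    """Forward a touched VR control's value (0.0-1.0) to its target device."""
    devices[device_name].send_message(address, float(value))

# E.g., the performer grabs a virtual fader mapped to a synth filter cutoff:
on_virtual_control("synth", "/filter/cutoff", 0.72)
```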




2021 ◽  
Author(s):  
Florian Kern ◽  
Thore Keser ◽  
Florian Niebling ◽  
Marc Erich Latoschik

2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 978-978
Author(s):  
Laurie Ruggiero ◽  
Elizabeth Orsega-Smith ◽  
Roghayeh Barmaki

Abstract: Exergames and digital health games have shown promising outcomes in older adults. Most games have had a single focus (e.g., physical activity or cognitive functioning). We developed a demonstration version of a multi-focus educational exergame (i.e., healthy eating, physical activity, cognition) that builds on healthy aging theory. Community-engaged and mixed-methods (e.g., surveys, focus groups) research approaches were used to examine preliminary game acceptability and usability. The game was demonstrated with 20 senior center members (95% female; 48% African American; 52% White; average age 64 years), and participants were able to play the game. The post-gameplay survey results support the acceptability and usability of the game. For example, 87% of participants "agreed" or "strongly agreed" that they felt comfortable playing, that the game instructions were clear, that the text was readable, and that gameplay was enjoyable. The majority also "agreed" or "strongly agreed" that the audio was appealing and helpful in playing the game (86%); that the sound quality was appropriate (78%); that the hand tracking was precise (57%); that feedback on correct/incorrect responses was motivating (73%); that they felt excited to get the correct answers (80%); that they would play the game again (87%); and that they would recommend it to a friend or family member (80%). When asked how often they would play it, the responses were: 33% five or more times per week; 27% three to four times per week; 20% one to two times per week; and 20% never. Observations and focus groups further clarified acceptability and identified areas for improvement (e.g., game instructions). Preliminary results support the acceptability of this multi-component educational exergame with older adults and suggest the potential for future tailoring of the game.

