Depth Camera Based Real-Time Fingertip Detection Using Multi-view Projection

Author(s): Weixin Yang ◽ Zhengyang Zhong ◽ Xin Zhang ◽ Lianwen Jin ◽ Chenlin Xiong ◽ ...
2017 ◽ Vol 56 (3) ◽ pp. 033104
Author(s): Xingyin Fu ◽ Feng Zhu ◽ Feng Qi ◽ Mingming Wang

Author(s): Dinh-Son Tran ◽ Ngoc-Huynh Ho ◽ Hyung-Jeong Yang ◽ Soo-Hyung Kim ◽ Guee Sang Lee

Abstract: A real-time fingertip-gesture-based interface is still challenging for human–computer interaction, owing to sensor noise, changing light levels, and the complexity of tracking a fingertip across a variety of subjects. Fingertip tracking as a virtual mouse is a popular way of interacting with computers without a physical mouse device. In this work, we propose a novel virtual-mouse method using RGB-D images and fingertip detection. The hand region of interest and the center of the palm are first extracted using depth and skeleton-joint information from a Microsoft Kinect Sensor version 2, and then converted into a binary image. The hand contours are then extracted and described by a border-tracing algorithm. The K-cosine algorithm detects the fingertip location from the hand-contour coordinates. Finally, the fingertip location is mapped to the RGB image to control the mouse cursor on a virtual screen. The system tracks fingertips in real time at 30 FPS on a desktop computer using a single CPU and a Kinect V2. The experimental results showed high accuracy, and the system works well in real-world environments. This fingertip-gesture-based interface allows humans to interact with computers easily by hand.
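The contour-to-fingertip step described in the abstract above can be sketched with the K-cosine curvature measure: for each contour point, the cosine of the angle formed by the vectors to the points k steps backward and forward along the contour is computed, and sharp spikes (cosine near 1) are fingertip candidates. This is a minimal illustration, not the authors' implementation; the function names, step size k, and threshold are assumptions.

```python
import numpy as np

def k_cosine(contour, k):
    """K-cosine curvature for each point of a closed contour.

    contour: (N, 2) array of ordered hand-contour coordinates.
    Returns, for each point, the cosine of the angle between the
    vectors to the points k steps backward and forward along the
    contour (indices wrap around). Values near 1 indicate a sharp
    spike, typical of a fingertip; values near -1 a flat stretch.
    """
    n = len(contour)
    idx = np.arange(n)
    back = contour[(idx - k) % n] - contour   # vector to point k behind
    fwd = contour[(idx + k) % n] - contour    # vector to point k ahead
    dot = (back * fwd).sum(axis=1)
    norm = np.linalg.norm(back, axis=1) * np.linalg.norm(fwd, axis=1)
    return dot / np.maximum(norm, 1e-9)       # guard against zero-length vectors

def fingertip_candidates(contour, k=10, cos_thresh=0.5):
    """Indices of contour points whose K-cosine exceeds the threshold
    (assumed values; a real system would tune k and the threshold to
    the contour sampling density and hand size)."""
    cos = k_cosine(np.asarray(contour, dtype=float), k)
    return np.where(cos > cos_thresh)[0]
```

On a toy closed contour with two sharp spikes, `fingertip_candidates` returns exactly the spike indices; in the full pipeline the input would be the border-traced hand contour from the binary depth image.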


Proceedings ◽ 2020 ◽ Vol 39 (1) ◽ pp. 18
Author(s): Nenchoo ◽ Tantrairatn

This paper presents real-time estimation of the 3D position of a UAV using an Intel RealSense D435i depth camera with visual object detection as a local positioning system for indoor environments. Outdoors, the global positioning system (GPS) can specify a UAV's position; indoors, however, GPS cannot determine it. The D435i stereo depth camera is therefore placed on the ground to observe the UAV and specify its position indoors instead of GPS. Deep-learning-based object detection identifies the target object and gives its 2D position in the image, while depth is estimated from the stereo camera and the known target size. In the experiment, a Parrot Bebop 2 serves as the target object and is detected by YOLOv3 as a real-time object detection system. Because a trained fully convolutional neural network (FCNN) model is essential for object detection, the model was trained on the Bebop 2 only. In conclusion, the proposed system can specify the 3D position of the Bebop 2 in indoor environments. In future work, this research will be developed and applied to visual navigation control of a drone swarm.
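The depth-from-target-size step described above can be sketched with a pinhole camera model: range follows from Z = f·W/w, where W is the known physical width of the target and w the detected bounding-box width in pixels, and the 2D detection is then back-projected into 3D. This is a minimal sketch under pinhole assumptions, not the paper's implementation; the focal length and the Bebop 2 width used below are illustrative values.

```python
def depth_from_size(bbox_width_px, real_width_m, focal_px):
    """Pinhole-model range estimate: Z = f * W / w.

    bbox_width_px: detected bounding-box width in pixels (e.g. from YOLOv3)
    real_width_m:  known physical width of the target (assumed known a priori)
    focal_px:      camera focal length expressed in pixels
    """
    return focal_px * real_width_m / bbox_width_px

def pixel_to_3d(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth z into camera coordinates
    using focal lengths (fx, fy) and principal point (cx, cy)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Illustrative values (not from the paper): a ~0.33 m wide target seen as a
# 100 px box by a camera with a 600 px focal length sits about 1.98 m away.
z = depth_from_size(100, 0.33, 600)
position = pixel_to_3d(420, 240, z, fx=600, fy=600, cx=320, cy=240)
```

In practice the D435i also reports per-pixel stereo depth directly, so a size-based estimate of this kind can serve as a cross-check on the sensor's own depth map.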


Author(s): Andreas Baak ◽ Meinard Müller ◽ Gaurav Bharaj ◽ Hans-Peter Seidel ◽ Christian Theobalt

2015 ◽ Vol 320 ◽ pp. 346-360
Author(s): Li Sun ◽ Zicheng Liu ◽ Ming-Ting Sun
