Gait energy volumes and frontal gait recognition using depth images

Author(s):  
Sabesan Sivapalan ◽  
Daniel Chen ◽  
Simon Denman ◽  
Sridha Sridharan ◽  
Clinton Fookes
2014 ◽  
Vol 644-650 ◽  
pp. 1015-1018 ◽  
Author(s):  
Hao Lin Zhang ◽  
Xian Ye Ben ◽  
Peng Zhang ◽  
Tian Jiao Liu

Gait period detection, serving as a preprocessing step for gait recognition, has been widely studied in recent years. In this paper, we propose a novel gait period detection method for depth gait video streams. The method introduces the concept of layered coding for depth images, which reduces computational complexity. Furthermore, the extrema of the sum of layered codes over a gait sequence are used to locate the period endpoints, which agrees with naked-eye observation. In addition, gait recognition experiments are conducted on the TUM GAID database, with the gait features of a single detected period described using a tensor representation. The high recognition accuracy verifies the effectiveness of the proposed depth gait period detection method.
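The core idea can be sketched concretely: quantize each depth frame into a small number of depth layers, sum the per-pixel layer codes for each frame, and read period endpoints off the extrema of the resulting one-dimensional signal. Below is a minimal Python sketch of this scheme; the layer count, depth range, and function names are our own assumptions rather than the authors' implementation.

    import numpy as np
    from scipy.signal import argrelextrema

    def layered_code_sum(depth_frame, n_layers=8, d_min=500, d_max=4000):
        # Quantize a depth frame (values in mm) into n_layers coarse layers
        # and return the sum of the per-pixel layer codes. A hypothetical
        # re-implementation of the 'layered coding' idea: coarse codes make
        # per-frame statistics cheap to compute.
        valid = (depth_frame > d_min) & (depth_frame < d_max)
        codes = np.zeros(depth_frame.shape, dtype=np.int64)
        codes[valid] = 1 + ((depth_frame[valid] - d_min) * n_layers
                            // (d_max - d_min)).astype(np.int64)
        return codes.sum()

    def detect_period_endpoints(depth_sequence, order=5):
        # Period endpoints are taken at local minima of the layered-code
        # sums, mirroring the extreme-value criterion described above.
        signal = np.array([layered_code_sum(f) for f in depth_sequence])
        return argrelextrema(signal, np.less, order=order)[0]

The spacing between successive detected minima then gives the gait period in frames.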


2021 ◽  
Vol 34 (4) ◽  
pp. 557-567
Author(s):  
Adnan Ramakic ◽  
Zlatko Bundalo ◽  
Zeljko Vidovic

In this paper we present features that may be used in person gait recognition applications. Gait recognition is an interesting way of identifying people. During a gait cycle, each person creates unique movement patterns that can be used for identification. Gait recognition methods also ordinarily require no interaction with the person, which is their main advantage. The features used in gait recognition methods can be obtained with widely available RGB and RGB-D cameras. Here we present two features suitable for use in gait recognition applications: the height of a person and the step length of a person. Both are extracted from depth images obtained from an RGB-D camera. For experimental purposes, we used a custom dataset created in an outdoor environment with a long-range stereo camera.
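As an illustration of how such features can be computed, the sketch below back-projects a segmented depth silhouette through a pinhole camera model and measures vertical extent (height) and the ground-plane distance between successive foot placements (step length). The intrinsics, axis conventions, and helper names are our own assumptions, not the authors' pipeline.

    import numpy as np

    def backproject(u, v, z, fx, fy, cx, cy):
        # Pinhole back-projection of pixel (u, v) with depth z (metres)
        # into camera-frame XYZ coordinates.
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    def person_height(mask, depth, fx, fy, cx, cy):
        # Height as the vertical extent of the segmented person in 3D.
        # Assumes the camera y-axis is roughly aligned with gravity and
        # `mask` is a binary person silhouette from the RGB-D frame.
        vs, us = np.nonzero(mask)
        pts = np.stack([backproject(u, v, depth[v, u], fx, fy, cx, cy)
                        for u, v in zip(us, vs)])
        return pts[:, 1].max() - pts[:, 1].min()

    def step_length(prev_foot, curr_foot):
        # Step length as the ground-plane (XZ) distance between two
        # successive foot placements in camera coordinates.
        return float(np.hypot(curr_foot[0] - prev_foot[0],
                              curr_foot[2] - prev_foot[2]))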


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate. However, sign language is not known to most other people, which creates a communication barrier for the speech impaired. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of a scene, whereas the Kinect captures 3D images, which makes classification more accurate. Result: The Kinect camera produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian Sign Language, and our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance of all the models. All results were obtained on a PYNQ Z2 board.
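The abstract does not specify the network architecture, so the following is only a generic baseline of the kind of depth-image CNN being compared: a small PyTorch model for 36-class gesture classification on single-channel depth inputs. The input resolution and layer sizes are our own assumptions.

    import torch
    import torch.nn as nn

    class DepthGestureCNN(nn.Module):
        # Small CNN for 36-class sign classification on 1-channel depth
        # images. A generic baseline, not the authors' exact architecture;
        # the 64x64 input size is assumed.
        def __init__(self, n_classes=36):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
            )
            self.classifier = nn.Linear(128 * 8 * 8, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = DepthGestureCNN()
    logits = model(torch.randn(4, 1, 64, 64))  # a batch of 4 depth images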


2004 ◽  
Author(s):  
Zongyi Liu ◽  
Laura Malave ◽  
Adebola Osuntogun ◽  
Preksha Sudhakar ◽  
Sudeep Sarkar

2021 ◽  
Vol 13 (5) ◽  
pp. 935
Author(s):  
Matthew Varnam ◽  
Mike Burton ◽  
Ben Esse ◽  
Giuseppe Salerno ◽  
Ryunosuke Kazahaya ◽  
...  

SO2 cameras are able to measure rapid changes in volcanic emission rate but require accurate calibrations and corrections to convert optical depth images into slant column densities. We conducted a test at Masaya volcano of two SO2 camera calibration approaches, calibration cells and a co-located spectrometer, and corrected both calibrations for light dilution, a process caused by light scattering between the plume and the camera. We demonstrate an advancement on the image-based correction that allows retrieval of the scattering efficiency across a 2D area of an SO2 camera image. When appropriately corrected for the dilution, our two calibration approaches produce final calculated emission rates that agree with simultaneously measured traverse flux data and with each other, although the observed distribution of gas within the image differs. We demonstrate that traverse and SO2 camera techniques, when used together, generate better plume speed estimates for traverses and improved knowledge of wind direction for the camera, producing more reliable emission rates. We suggest that combining traverses with the SO2 camera should be adopted where possible.
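The light dilution correction can be illustrated with a simple single-scattering model in which a fraction f of the light reaching each pixel has been scattered in between the plume and the camera, so that exp(-tau_apparent) = (1 - f) * exp(-tau_true) + f. The Python sketch below inverts this model, applies a linear calibration, and integrates the slant columns into an emission rate; it is an illustrative simplification, not the paper's 2D retrieval.

    import numpy as np

    def correct_light_dilution(tau_app, f):
        # Invert the assumed single-scattering dilution model to recover
        # the true optical depth from the apparent one; f is the scattered
        # (diluting) fraction of the measured light, with 0 <= f < 1.
        return -np.log((np.exp(-tau_app) - f) / (1.0 - f))

    def calibrate(tau_true, gain):
        # Linear calibration from optical depth to slant column density;
        # the gain comes from calibration cells or a co-located spectrometer.
        return gain * tau_true

    def emission_rate(scd_cross_section, pixel_width_m, plume_speed_ms):
        # Integrate slant column densities (assumed already converted to
        # kg/m^2) along a plume cross-section and multiply by the plume
        # speed to obtain an emission rate in kg/s.
        return np.sum(scd_cross_section) * pixel_width_m * plume_speed_ms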


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1299
Author(s):  
Honglin Yuan ◽  
Tim Hoogenkamp ◽  
Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, compared with other vision-based tasks, it is difficult to collect a representative and sufficiently large training set for six-dimensional (6D) object pose estimation, due to the inherent difficulty of data collection. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Based on the generated data, we then produce object segmentation masks and two-dimensional (2D) bounding boxes automatically. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups through the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images help increase the performance of pose estimation algorithms.
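For context on how learning-based 6D pose estimators are commonly scored, the widely used average-distance (ADD) metric compares model points transformed by the ground-truth and estimated poses; whether this benchmark uses ADD specifically is our assumption, and the sketch below is a generic illustration.

    import numpy as np

    def add_metric(model_points, R_gt, t_gt, R_est, t_est):
        # Mean distance between the object's 3D model points transformed
        # by the ground-truth pose and by the estimated pose. A pose is
        # commonly counted correct when ADD < 10% of the object diameter.
        gt = model_points @ R_gt.T + t_gt
        est = model_points @ R_est.T + t_est
        return np.linalg.norm(gt - est, axis=1).mean()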

