HRDepthNet: Depth Image-Based Marker-Less Tracking of Body Joints

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1356
Author(s):  
Linda Christin Büker ◽  
Finnja Zuber ◽  
Andreas Hein ◽  
Sebastian Fudickar

While approaches such as HRNet and OpenPose are available for detecting joint positions in color images, corresponding approaches for depth images have received limited consideration, even though depth images have several advantages over color images, such as robustness to light variation and invariance to color and texture. Correspondingly, we introduce High-Resolution Depth Net (HRDepthNet)—a machine-learning-driven approach to detect human joints (body, head, and upper and lower extremities) purely in depth images. HRDepthNet retrains the original HRNet for depth images. For this purpose, a dataset was created holding depth (and RGB) images recorded of subjects conducting the timed up and go test—an established geriatric assessment. The images were manually annotated based on the RGB images. Training and evaluation were conducted with this dataset. For accuracy evaluation, the detection of body joints was evaluated via COCO's evaluation metrics, indicating that the resulting depth-image-based model achieved better results than HRNet trained and applied on the corresponding RGB images. An additional evaluation of the position errors showed a median deviation of 1.619 cm (x-axis), 2.342 cm (y-axis) and 2.4 cm (z-axis).
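As an illustration of the reported per-axis position error, a minimal sketch for computing the median deviation between predicted and ground-truth joint positions could look as follows (the function name and array layout are assumptions, not taken from the paper):

```python
import numpy as np

def median_axis_deviation(pred_joints, gt_joints):
    """Median absolute deviation per axis (x, y, z) between predicted and
    ground-truth joint positions, both given as (N, 3) arrays in centimetres.
    Illustrative only."""
    deviations = np.abs(np.asarray(pred_joints) - np.asarray(gt_joints))
    return {axis: float(np.median(deviations[:, i]))
            for i, axis in enumerate(("x", "y", "z"))}
```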

2014 ◽  
Vol 556-562 ◽  
pp. 4081-4084
Author(s):  
Li Jun Zhang ◽  
Fei Chen

This paper proposes a novel monocular visual odometry method based on the Microsoft Kinect sensor and an improved SURF algorithm. First, the Kinect sensor captures color images and depth images of the surrounding environment; then the improved SURF algorithm is used to extract feature points from the color images and match them. Finally, the matched points are mapped to the depth image, and the path of the robot is estimated by performing 3D reconstruction and applying least-squares estimation. Experimental results show that with this new method the average matching accuracy reaches 92.6%, and it shows good robustness even in a dynamic environment. We therefore conclude that combining the Kinect sensor with the improved SURF algorithm for visual odometry is a simple and effective method.
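For context, a minimal sketch of standard SURF feature extraction and matching between two consecutive color frames with OpenCV (opencv-contrib) might look like the following; the paper's improved SURF and the subsequent depth mapping and 3D reconstruction are not reproduced:

```python
import cv2

def match_surf(img1_path, img2_path, hessian_threshold=400):
    """Detect and match SURF keypoints between two consecutive color frames.
    Uses the standard OpenCV SURF implementation for illustration."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    return kp1, kp2, good
```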


2020 ◽  
Vol 12 (7) ◽  
pp. 1142
Author(s):  
Jeonghoon Kwak ◽  
Yunsick Sung

To provide a realistic environment for remote sensing applications, point clouds are used to realize a three-dimensional (3D) digital world for the user. Motion recognition of objects, e.g., humans, is required to provide realistic experiences in the 3D digital world. To recognize a user's motions, 3D landmarks are provided by analyzing a 3D point cloud collected through a light detection and ranging (LiDAR) system or a red green blue (RGB) image collected visually. However, manual supervision is required to extract 3D landmarks, regardless of whether they originate from the RGB image or the 3D point cloud. Thus, there is a need for a method for extracting 3D landmarks without manual supervision. Herein, an RGB image and a 3D point cloud are used to extract 3D landmarks. The 3D point cloud provides the relative distance between the LiDAR and the user. Because the point cloud cannot contain all the information about the user's entire body, it cannot by itself generate a dense depth image that provides the boundary of the user's body. Therefore, up-sampling is performed to increase the density of the depth image generated from the 3D point cloud. This paper proposes a system for extracting 3D landmarks using 3D point clouds and RGB images without manual supervision. A depth image that provides the boundary of a user's motion is generated using the 3D point cloud and the RGB image collected by a LiDAR and an RGB camera, respectively. To extract 3D landmarks automatically, an encoder–decoder model is trained with the generated depth images and the RGB images, and 3D landmarks are extracted from these images with the trained encoder model. The method of extracting 3D landmarks using RGB depth (RGBD) images was verified experimentally, and 3D landmarks were extracted to evaluate the user's motions with RGBD images. In this manner, landmarks could be extracted according to the user's motions, rather than by extracting them from the RGB images alone. The depth images generated by the proposed method were 1.832 times denser than the up-sampling-based depth images generated with bilateral filtering.
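To make the point-cloud-to-depth-image step concrete, a minimal sketch of projecting a LiDAR point cloud into a sparse depth image with pinhole intrinsics might look as follows (the function name, coordinate convention, and intrinsic matrix are assumptions; LiDAR-to-camera calibration and the up-sampling step are omitted):

```python
import numpy as np

def project_point_cloud_to_depth(points, K, image_shape):
    """Project an (N, 3) point cloud in camera coordinates into a sparse depth
    image using a 3x3 intrinsic matrix K. Illustrative only."""
    h, w = image_shape
    depth = np.zeros((h, w), dtype=np.float32)
    z = points[:, 2]
    valid = z > 0
    uvw = (K @ points[valid].T).T                   # perspective projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth  # sparse depth map, to be densified (e.g. by up-sampling) afterwards
```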


2021 ◽  
pp. 1-21
Author(s):  
S.S. Suni ◽  
K. Gopakumar

In this study, we propose a multimodal feature-based framework for recognising hand gestures from RGB and depth images. In addition to the features from the RGB image, depth-image features are exploited to construct discriminative feature representations of the various gestures. Depth maps are a powerful source of information and increase the performance of various computer vision tasks. A newly refined Gradient-Local Binary Pattern (G-LBP) is applied to extract features from the depth images, and histogram of oriented gradients (HOG) features are extracted from the RGB images. The components from both the RGB and depth channels are concatenated to form a multimodal feature vector. Finally, classification is performed using K-Nearest Neighbour and multi-class Support Vector Machine classifiers. The designed system is invariant to scale, rotation and illumination, and the newly developed feature-combination method helps achieve superior recognition rates and provides a basis for future innovations.
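A minimal sketch of the feature-concatenation idea is shown below, using standard HOG and LBP from scikit-image as stand-ins; the paper's refined G-LBP descriptor is not reproduced, and the parameter values are assumptions:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def extract_features(gray_image, depth_image):
    """Concatenate HOG features of the (grayscale) RGB image with an LBP
    histogram of the depth image; standard LBP stands in for G-LBP."""
    hog_feat = hog(gray_image, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(depth_image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_feat, lbp_hist])

# X = np.array([extract_features(g, d) for g, d in training_pairs])
# clf = SVC(kernel="rbf").fit(X, labels)   # or a KNeighborsClassifier
```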


2020 ◽  
Author(s):  
Rui Fan ◽  
Hengli Wang ◽  
Junaid Bocus ◽  
Ming Liu

Manual visual inspection, typically performed by certified inspectors, is still the main form of road pothole detection. This process is, however, not only tedious, time-consuming and costly, but also dangerous for the inspectors. Furthermore, the road pothole detection results are always subjective, because they depend entirely on the inspector's experience. In this paper, we first introduce a disparity (or inverse depth) image processing module, named quasi inverse perspective transformation (QIPT), which can make the damaged road areas highly distinguishable. Then, we propose a novel attention aggregation (AA) framework, which improves semantic segmentation networks for better road pothole detection by taking advantage of different types of attention modules. Moreover, we develop a novel training-set augmentation technique based on adversarial domain adaptation, where synthetic road RGB images and transformed road disparity (or inverse depth) images are generated to enhance the training of semantic segmentation networks. The experimental results illustrate that, firstly, the disparity (or inverse depth) images transformed by our QIPT module become more informative; secondly, the adversarial domain adaptation can not only significantly improve the performance of state-of-the-art semantic segmentation networks, but also accelerate their convergence. In addition, AA-UNet and AA-RTFNet, our best-performing implementations, respectively outperform all other state-of-the-art single-modal and data-fusion networks for road pothole detection.
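As an illustration of the kind of attention module such an aggregation framework could combine, a generic channel-attention (squeeze-and-excitation style) block in PyTorch is sketched below; it is not the paper's AA framework, and the layer sizes are assumptions:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel-attention block: reweights feature channels by a
    learned per-channel gate. Illustrative only."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # reweighted features passed on to the segmentation decoder
```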


2021 ◽  
Author(s):  
Saddam Abdulwahab ◽  
Hatem A. Rashwan ◽  
Armin Masoumian ◽  
Najwa Sharaf ◽  
Domenec Puig

Pose estimation is typically performed on 3D images, whereas estimating the pose from a single RGB image is still a difficult task. RGB images not only represent objects' shape but also intensity, which depends on the viewpoint, texture, and lighting conditions. In contrast, 3D pose estimation from depth images is considered a promising approach, since a depth image represents only the objects' shape. Thus, an appropriate method is needed for predicting a depth image from a 2D RGB image, which can then be used for 3D pose estimation. In this paper, we propose a promising approach based on a deep learning model for depth estimation in order to improve 3D pose estimation. The proposed model consists of two successive networks. The first network is an autoencoder that maps from the RGB domain to the depth domain. The second network is a discriminator that compares a real depth image to a generated depth image, helping the first network generate accurate depth images. In this work, we do not use real depth images corresponding to the input color images. Our contribution is to use 3D CAD models corresponding to the objects appearing in the color images to render depth images from different viewpoints. These rendered images are then used as ground truth to guide the autoencoder network in learning the mapping from the image domain to the depth domain. The proposed model outperforms state-of-the-art models on the public PASCAL 3D+ dataset.
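A minimal PyTorch sketch of the two-network idea (an encoder-decoder generator mapping RGB to depth, and a discriminator judging rendered versus generated depth maps) is given below; the layer sizes and structure are illustrative assumptions, not the paper's architecture:

```python
import torch.nn as nn

class DepthGenerator(nn.Module):
    """Encoder-decoder mapping a 3-channel RGB image to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

class DepthDiscriminator(nn.Module):
    """Scores how likely a depth map is a rendered (real) one rather than a
    generated one, guiding the generator adversarially."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid(),
        )

    def forward(self, depth):
        return self.net(depth)
```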


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1299
Author(s):  
Honglin Yuan ◽  
Tim Hoogenkamp ◽  
Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, when compared with other vision-based tasks, it is difficult to collect a representative and sufficiently large training set for six-dimensional (6D) object pose estimation, due to the inherent difficulty of data collection. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Subsequently, based on the generated data, we produce object segmentation masks and two-dimensional (2D) bounding boxes automatically. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups through the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images are helpful in increasing the performance of pose estimation algorithms.
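For reference, a common way to score predictions in 6D pose benchmarks is the average-distance (ADD) metric; a minimal sketch is shown below (the function name and inputs are assumptions, and the benchmark's exact evaluation protocol may differ):

```python
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_pred, t_pred):
    """Average distance (ADD) between the object model points transformed by
    the ground-truth and the predicted 6D pose. Illustrative sketch."""
    pts_gt = model_points @ R_gt.T + t_gt
    pts_pred = model_points @ R_pred.T + t_pred
    return float(np.mean(np.linalg.norm(pts_gt - pts_pred, axis=1)))

# A prediction is often counted correct when ADD is below 10% of the object diameter.
```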


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2815
Author(s):  
Shih-Hung Yang ◽  
Yao-Mao Cheng ◽  
Jyun-We Huang ◽  
Yon-Ping Chen

Automatic fingerspelling recognition tackles the communication barrier between deaf and hearing individuals. However, the accuracy of fingerspelling recognition is reduced by high intra-class variability and low inter-class variability. In existing methods, regular convolutional kernels, which have limited receptive fields (RFs) and often cannot detect subtle discriminative details, are applied to learn features. In this study, we propose a receptive field-aware network with finger attention (RFaNet) that highlights the finger regions and builds inter-finger relations. To highlight the discriminative details of the fingers, RFaNet reweights the low-level features of the hand depth image with those of the non-forearm image and improves finger localization, even when the wrist is occluded. RFaNet captures neighboring and inter-region dependencies between fingers in high-level features. An atrous convolution procedure enlarges the RFs at multiple scales, and a non-local operation computes the interactions between multi-scale feature maps, thereby facilitating the building of inter-finger relations. Thus, the representation of a sign is invariant to viewpoint changes, which are primarily responsible for intra-class variability. On an American Sign Language fingerspelling dataset, RFaNet achieved 1.77% higher classification accuracy than state-of-the-art methods. RFaNet also achieved effective transfer learning when the number of labeled depth images was insufficient: the fingerspelling representation of a depth image can be effectively transferred from large- to small-scale datasets by highlighting the finger regions and building inter-finger relations, thereby reducing the requirement for expensive fingerspelling annotations.
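To illustrate the receptive-field enlargement mentioned above, a minimal PyTorch sketch of multi-scale atrous (dilated) convolutions is shown below; the dilation rates and channel sizes are assumptions, and the paper's full RFaNet design (finger attention, non-local operation) is not reproduced:

```python
import torch
import torch.nn as nn

class MultiScaleAtrous(nn.Module):
    """Parallel atrous (dilated) convolutions that enlarge the receptive field
    at multiple scales, then fuse the resulting feature maps."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))  # fuse multi-scale context
```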


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2553
Author(s):  
Jingwen Cui ◽  
Jianping Zhang ◽  
Guiling Sun ◽  
Bowen Zheng

Based on computer vision technology, this paper proposes a method for identifying and locating crops so that they can be successfully captured during automatic crop picking. The method combines the YOLOv3 algorithm under the DarkNet framework with a point cloud image coordinate-matching method. First, RGB (red, green, blue) images and depth images are obtained using the Kinect v2 depth camera. Second, the YOLOv3 algorithm is used to identify the various types of target crops in the RGB images, and the feature points of the target crops are determined. Finally, the 3D coordinates of the feature points are displayed on the point cloud images. Compared with other methods, this crop-identification method has high accuracy and a small positioning error, which lays a good foundation for the subsequent harvesting of crops using mechanical arms. In summary, the method can be considered effective.
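As an illustration of the 2D-to-3D coordinate matching, a minimal sketch of back-projecting a detected feature point to 3D camera coordinates from its depth value and pinhole intrinsics is shown below; the intrinsic parameters must come from the Kinect v2 calibration, and the exact matching procedure used in the paper may differ:

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a detected feature point (u, v) with depth in metres to 3D
    camera coordinates using pinhole intrinsics. Illustrative only."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# e.g. the centre of a YOLOv3 bounding box (u, v), looked up in the aligned
# depth image, gives depth_m and hence the crop's 3D position for the arm.
```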

