Human Tracking of a Crawler Robot in Climbing Stairs

2021 ◽  
Vol 33 (6) ◽  
pp. 1338-1348
Author(s):  
Yasuaki Orita ◽  
Kiyotsugu Takaba ◽  
Takanori Fukao

There are many reports of secondary injuries to crews during firefighting operations. One way to support and enhance their activities is to have robots track them and carry supplies. In this paper, we propose a localization method for stairs that includes scene detection, allowing a robot to track a person up a staircase. First, the scene detection autonomously recognizes that the person is climbing the stairs. Then, a linear model representing the first step of the staircase is combined with the person's trajectory for localization. The method uses omnidirectional imaging and point clouds, so localization and scene detection are available from any posture around the stairs. Finally, using the localization result, the robot automatically navigates to a posture from which it can climb the stairs. Verification confirmed the accuracy and real-time capability of the method and demonstrated that an actual crawler robot autonomously assumes a posture ready for climbing.
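The step-model idea lends itself to a compact illustration. The sketch below is an assumption-laden reading of the abstract, not the authors' implementation: it fits a line to a horizontal 2-D slice of step-edge points and derives an approach pose perpendicular to the step. All function and variable names are hypothetical.

```python
# Minimal sketch: fit a line to first-step edge points, derive a climbing pose.
import numpy as np

def fit_step_line(points_xy: np.ndarray):
    """Least-squares line fit through 2-D step-edge points via SVD."""
    centroid = points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(points_xy - centroid)
    direction = vt[0]                                 # unit vector along the step
    normal = np.array([-direction[1], direction[0]])  # perpendicular to the step
    return centroid, direction, normal

def climbing_pose(centroid, normal, standoff=0.5):
    """Pose facing the step from `standoff` meters away.
    The sign of `normal` may need flipping toward the robot's side."""
    position = centroid - standoff * normal
    heading = np.arctan2(normal[1], normal[0])        # yaw facing the step
    return position, heading

# Example: noisy points sampled along the first step edge.
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=50)
edge = np.stack([t, 0.02 * rng.standard_normal(50) + 2.0], axis=1)
c, d, n = fit_step_line(edge)
pos, yaw = climbing_pose(c, n)
print(pos, np.degrees(yaw))
```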

Author(s):  
Zhiyong Gao ◽  
Jianhong Xiang

Background: When detecting objects directly from a 3D point cloud, the natural 3D patterns and invariances of the data are often obscured. Objective: In this work, we study 3D object detection from discrete, disordered, and sparse 3D point clouds. Methods: The proposed convolutional neural network (CNN) is composed of a frustum sequence module, a 3D instance segmentation module (S-NET), a 3D point cloud transformation module (T-NET), and a 3D bounding box estimation module (E-NET). The frustum sequence module determines the search space of the object, the 3D instance segmentation module performs instance segmentation of the point cloud, and the transformation and bounding box estimation modules confirm the 3D coordinates of the object. Results: Evaluated on the KITTI benchmark dataset, our method outperforms the state of the art by remarkable margins while retaining real-time capability. Conclusion: We achieve real-time 3D object detection with an improved CNN based on image-driven point clouds.
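To make the frustum stage concrete, here is a hedged sketch of the generic idea only (the S-NET/T-NET/E-NET stages are not reproduced): keep the LiDAR points whose image projection falls inside a 2-D detection box, yielding the reduced search space that the later modules operate on. The intrinsics and names are illustrative assumptions.

```python
# Sketch: extract the frustum of 3-D points behind a 2-D detection box.
import numpy as np

def frustum_points(points: np.ndarray, K: np.ndarray, box: tuple) -> np.ndarray:
    """points: (N, 3) in the camera frame; K: 3x3 intrinsics; box: (u1, v1, u2, v2)."""
    pts = points[points[:, 2] > 1e-6]     # keep points in front of the camera
    uvw = (K @ pts.T).T                   # project onto the image plane
    uv = uvw[:, :2] / uvw[:, 2:3]
    u1, v1, u2, v2 = box
    inside = (uv[:, 0] >= u1) & (uv[:, 0] <= u2) & \
             (uv[:, 1] >= v1) & (uv[:, 1] <= v2)
    return pts[inside]

K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0,   0.0,   1.0]])       # KITTI-like intrinsics (illustrative)
pts = np.random.rand(1000, 3) * [20, 4, 40]
frustum = frustum_points(pts, K, (500, 100, 700, 250))
print(frustum.shape)
```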


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6425
Author(s):  
Daniel Ledwoń ◽  
Marta Danch-Wierzchowska ◽  
Marcin Bugdol ◽  
Karol Bibrowicz ◽  
Tomasz Szurmik ◽  
...  

Postural disorders and their prevention and therapy remain growing modern problems. Currently used diagnostic methods are questionable because they either expose the subject to side effects (radiological methods) or are time-consuming and subjective (manual methods). Although computer-aided diagnosis of posture disorders is well developed, there is still a need to improve existing solutions, search for new measurement methods, and create new data-processing algorithms. Based on point clouds from a Time-of-Flight camera, the presented method allows non-contact, real-time detection of anatomical landmarks on the subject's back and, thus, an objective determination of trunk surface metrics. The accuracy of the results was confirmed by comparison with the evaluations of three independent experts: the average distance between the expert indications and the method's results over all landmarks was 27.73 mm. A direct comparison showed that the differences were statistically significant; however, the effect size was negligible. Compared with other automatic anatomical landmark detection methods, ours has similar accuracy while allowing real-time analysis. The advantages of the presented method are its non-invasiveness, non-contact operation, and the possibility of continuous observation, including during exercise. The proposed solution is another step in the general trend toward objectivization in physiotherapeutic diagnostics.
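As a rough illustration of turning detected landmarks into a trunk surface metric, consider the sketch below. The landmark names and the metric itself are assumptions for illustration, not the paper's definitions: it computes a shoulder-line tilt angle from two 3-D landmarks detected on the back.

```python
# Sketch: one simple trunk-surface asymmetry metric from two landmarks.
import numpy as np

def shoulder_tilt_deg(left: np.ndarray, right: np.ndarray) -> float:
    """Angle between the shoulder line and the horizontal plane, in degrees.
    Assumes y is the vertical axis of the camera frame."""
    v = right - left
    horizontal = np.linalg.norm(v[[0, 2]])            # x-z (horizontal) span, mm
    return float(np.degrees(np.arctan2(v[1], horizontal)))

# Hypothetical landmark coordinates in millimeters (ToF camera frame).
left_shoulder = np.array([-180.0, 1450.0, 950.0])
right_shoulder = np.array([182.0, 1431.0, 948.0])
print(f"shoulder tilt: {shoulder_tilt_deg(left_shoulder, right_shoulder):.1f} deg")
```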


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6781
Author(s):  
Tomasz Nowak ◽  
Krzysztof Ćwian ◽  
Piotr Skrzypczyński

This article demonstrates the feasibility of modern deep learning techniques for the real-time detection of non-stationary objects in point clouds obtained from 3D light detection and ranging (LiDAR) sensors. The motion segmentation task is considered in the context of automotive Simultaneous Localization and Mapping (SLAM), where we often need to distinguish between the static parts of the environment, with respect to which the vehicle is localized, and non-stationary objects that should not be included in the map used for localization. Non-stationary objects do not provide repeatable readouts, either because they can be in motion, like vehicles and pedestrians, or because they lack a rigid, stable surface, like trees and lawns. The proposed approach exploits images synthesized from the intensity data returned by modern LiDARs along with the usual range measurements. We demonstrate that non-stationary objects can be detected using neural network models trained on 2D grayscale images in a supervised or unsupervised process. This concept alleviates the lack of large datasets of 3D laser scans with point-wise annotations of non-stationary objects. The point clouds are then filtered using the corresponding intensity images with labeled pixels. Finally, we demonstrate that detecting non-stationary objects with our approach improves localization results and map consistency in a laser-based SLAM system.
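The filtering step admits a short sketch under stated assumptions: the scan is organized so that point i corresponds to pixel (rows[i], cols[i]) of the synthesized intensity image, and a per-pixel mask (True = non-stationary) comes from the network. Masked points are dropped before they reach the SLAM map. Names here are hypothetical, not the authors' API.

```python
# Sketch: drop LiDAR points whose intensity-image pixel is labeled dynamic.
import numpy as np

def filter_non_stationary(points, rows, cols, dynamic_mask):
    """Keep only points whose corresponding pixel is labeled static."""
    keep = ~dynamic_mask[rows, cols]
    return points[keep]

H, W = 64, 1024                                    # typical spinning-LiDAR grid
points = np.random.rand(5000, 3) * 50.0
rows = np.random.randint(0, H, size=5000)
cols = np.random.randint(0, W, size=5000)
mask = np.zeros((H, W), dtype=bool)
mask[:, 300:400] = True                            # pretend pixels of a moving car
static_points = filter_non_stationary(points, rows, cols, mask)
print(len(points), "->", len(static_points))
```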


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2084
Author(s):  
Junwon Lee ◽  
Kieun Lee ◽  
Aelee Yoo ◽  
Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine Internet of Things (IoT) and automobile technologies, contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near-real-time and real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data and provides services for vehicle driving is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data from an AV. The edge node of the driving vehicle transmits the LiDAR point cloud to the fog server over a wireless network. The fog server generates an HD map by applying the Normal Distributions Transform Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from multiple edge nodes. Subsequently, the coordinate information of the HD map, generated in the sensor frame, is converted to the global frame and transmitted to the cloud server. The cloud server then creates a complete HD map by integrating the collected point clouds using this coordinate information.
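The sensor-to-global conversion and the final merge reduce to standard rigid-body transforms. The sketch below shows that generic step, not the paper's exact pipeline: each edge node's local map is carried into the global frame with a 4x4 homogeneous transform, and the cloud server concatenates the transformed clouds.

```python
# Sketch: transform per-vehicle maps to the global frame and merge them.
import numpy as np

def to_global(points: np.ndarray, T_global_sensor: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to (N, 3) sensor-frame points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (T_global_sensor @ homo.T).T[:, :3]

def merge_maps(local_maps: list, transforms: list) -> np.ndarray:
    """Integrate per-vehicle maps into one global HD-map point cloud."""
    return np.vstack([to_global(m, T) for m, T in zip(local_maps, transforms)])

# Example: one vehicle's map shifted 100 m east and rotated 90 degrees.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 100.0],
              [np.sin(theta),  np.cos(theta), 0.0,   0.0],
              [0.0,            0.0,           1.0,   0.0],
              [0.0,            0.0,           0.0,   1.0]])
local = np.random.rand(1000, 3) * 20.0
print(merge_maps([local], [T]).shape)
```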

