Real-time collision detection and distance computation on point cloud sensor data

Author(s):  
Jia Pan ◽  
Ioan A. Sucan ◽  
Sachin Chitta ◽  
Dinesh Manocha


2018 ◽  
Vol 7 (4.11) ◽  
pp. 179 ◽  
Author(s):  
M. R. Shahrin ◽  
F. H. Hashim ◽  
W. M.D.W. Zaki ◽  
A. Hussain ◽  
T. T. Raj

Most 3D scanners are heavy, bulky and costly, which makes them unsuitable for mounting on a drone for autonomous navigation. With modern technologies, however, it is possible to design a simple 3D scanner for this purpose. The objective of this study is to design a cost-effective 3D indoor mapping system for a drone using a 2D light detection and ranging (LiDAR) sensor. This simple 3D scanner is realised by combining a LiDAR sensor with two servo motors that provide the azimuth and elevation axes. An Arduino Uno serves as the interface between the scanner and the computer, communicating in real time over a serial port. In addition, an open-source Point-Cloud Tool software is used to test and view the 3D scanner data. To study the accuracy and efficiency of the system, the LiDAR sensor data from the scanner are obtained in real time in point-cloud form. The experimental results show that the proposed system can perform 2D and 3D scans with acceptable performance.  
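As an illustration of the scanning geometry described above, a minimal Python sketch of turning one (azimuth, elevation, range) sample from such a pan/tilt scanner into a Cartesian point is shown below. The angle conventions, serial frame format, port name and baud rate are assumptions for the example, not details taken from the paper.

```python
import math

def scan_point_to_cartesian(azimuth_deg, elevation_deg, range_m):
    """Convert one (azimuth, elevation, range) sample into a Cartesian
    point. Assumed conventions: azimuth rotates about the vertical
    axis, elevation is measured up from the horizontal plane."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Hypothetical serial framing, one "azimuth,elevation,range" line per
# sample from the Arduino (requires pyserial and a connected device):
# import serial
# with serial.Serial("/dev/ttyACM0", 115200) as port:
#     for raw in port:
#         az, el, r = map(float, raw.decode().strip().split(","))
#         print(scan_point_to_cartesian(az, el, r))
```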


2018 ◽  
Vol 15 (6) ◽  
pp. 172988141882022 ◽  
Author(s):  
Gang Chen ◽  
Dan Liu ◽  
Yifan Wang ◽  
Qingxuan Jia ◽  
Xiaodong Zhang

Obstacle avoidance is of great importance for path planning of manipulators in dynamic environments. To help manipulators successfully perform tasks, a method of path planning with obstacle avoidance is proposed in this article. It consists of two consecutive phases: collision detection and obstacle-avoidance path planning. Collision detection is realized by establishing a point-cloud model and testing intersections between axis-aligned bounding box (AABB) trees, while obstacle-avoidance path planning is achieved by preplanning a global path and adjusting it in real time. This article makes the following contributions. The point-cloud model is of high resolution while the speed of collision detection is improved, and collision points can be located exactly. The preplanned global path is optimized with an improved D* algorithm, which reduces inflection points and decreases the collision probability. The real-time path-adjustment strategy satisfies the reachability and obstacle-avoidance requirements for manipulators in a dynamic environment. Simulations and experiments are carried out to evaluate the validity of the proposed method, which is applicable to manipulators of any degree of freedom in dynamic environments.
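The AABB-tree intersection test at the heart of the collision-detection phase can be sketched as follows. The tree layout, field names and the assumption that internal nodes always have two children are illustrative choices for the example, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class AABB:
    lo: Tuple[float, float, float]   # minimum corner (x, y, z)
    hi: Tuple[float, float, float]   # maximum corner (x, y, z)

def overlaps(a: AABB, b: AABB) -> bool:
    # Two boxes intersect iff their extents overlap on every axis.
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

@dataclass
class Node:
    box: AABB
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    points: Optional[List[int]] = None   # leaf only: indices into the cloud

def collide(a: Node, b: Node, hits: List[Tuple[List[int], List[int]]]) -> None:
    """Descend both trees simultaneously; record pairs of overlapping
    leaves, whose point indices are the candidate collision points."""
    if not overlaps(a.box, b.box):
        return
    a_leaf, b_leaf = a.points is not None, b.points is not None
    if a_leaf and b_leaf:
        hits.append((a.points, b.points))
    elif a_leaf:
        collide(a, b.left, hits); collide(a, b.right, hits)
    elif b_leaf:
        collide(a.left, b, hits); collide(a.right, b, hits)
    else:
        collide(a.left, b.left, hits); collide(a.left, b.right, hits)
        collide(a.right, b.left, hits); collide(a.right, b.right, hits)
```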


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3432 ◽  
Author(s):  
Gaopeng Zhao ◽  
Sixiong Xu ◽  
Yuming Bo

Determining the relative pose between a chaser spacecraft and a high-speed tumbling target spacecraft at close range, an essential step in space proximity missions, is very challenging. This paper proposes a LiDAR-based pose-tracking method that fuses depth maps and point clouds. The key idea is to estimate the roll-angle variation between adjacent sensor frames using line detection and matching in the depth maps. An adaptive voxelized-grid simplification of the point cloud, driven by the real-time relative position, is adopted to satisfy the real-time requirement of the approaching process. In addition, the Iterative Closest Point (ICP) algorithm is used to align the simplified sparse point cloud with the known target model point cloud to obtain the relative pose. Numerical experiments, which simulate the typical tumbling motion of the target and the approaching process, are performed to demonstrate the method. The experimental results show that the method can estimate the 6-DOF relative pose in real time and cope with large pose variations.
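A minimal sketch of the distance-adaptive simplification plus ICP alignment step, assuming the Open3D library, is given below. The linear voxel-size schedule and the 0.05 m floor are invented parameters for the example, since the abstract only states that the grid adapts to the real-time relative position.

```python
import numpy as np
import open3d as o3d

def estimate_pose(scan_pts: np.ndarray, model_pts: np.ndarray,
                  range_m: float, init_T: np.ndarray) -> np.ndarray:
    """Downsample the LiDAR scan with a range-dependent voxel grid,
    then align it to the known target model with point-to-point ICP.
    `init_T` is a 4x4 initial guess, e.g. the previous frame's pose
    combined with the roll angle from depth-map line matching."""
    voxel = max(0.05, 0.01 * range_m)   # coarser grid when far away (illustrative)
    scan = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_pts))
    model = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
    scan = scan.voxel_down_sample(voxel_size=voxel)
    result = o3d.pipelines.registration.registration_icp(
        scan, model,
        max_correspondence_distance=2.0 * voxel,
        init=init_T,
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPoint())
    return result.transformation       # 4x4 relative pose estimate
```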


Author(s):  
Alexander Bigalke ◽  
Lasse Hansen ◽  
Jasper Diesel ◽  
Mattias P. Heinrich

Abstract. Purpose: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data. Methods: We propose a novel deep learning framework, which comprises two 3D CNN modules solving the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient's volumetric surface without a cover. Second, the patient's weight is predicted from this 3D volume by means of a 3D CNN architecture, which we optimized for weight regression. Results: We evaluate our approach on a lying-pose dataset (SLP) under two different cover conditions. The proposed framework improves on the baseline model by up to 16% and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to 52%. Conclusion: We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that were required for accurate weight estimates by previous contactless methods and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
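The second-stage weight regressor can be sketched in PyTorch roughly as follows. Channel widths, depth and the global-pooling head are illustrative stand-ins, not the authors' optimized architecture, and `uncover_net` in the usage comment is a hypothetical handle for the stage-one 3D U-Net.

```python
import torch
import torch.nn as nn

class WeightRegressor3D(nn.Module):
    """Minimal stand-in for the second-stage 3D CNN: volumetric
    features -> global average pooling -> scalar weight."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))          # (B, 64, 1, 1, 1)
        self.head = nn.Linear(64, 1)

    def forward(self, vol: torch.Tensor) -> torch.Tensor:
        # vol: (B, 1, D, H, W) voxelized point cloud / occupancy grid
        return self.head(self.features(vol).flatten(1)).squeeze(1)

# Two-stage pipeline as described in the abstract (stage one assumed):
# uncovered = uncover_net(covered_volume)      # 3D U-Net: virtual uncovering
# weight_kg = WeightRegressor3D()(uncovered)   # 3D CNN: weight regression
```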


2021 ◽  
Vol 1910 (1) ◽  
pp. 012002
Author(s):  
Chao He ◽  
Jiayuan Gong ◽  
Yahui Yang ◽  
Dong Bi ◽  
Jianpin Lan ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 642
Author(s):  
Luis Miguel González de Santos ◽  
Ernesto Frías Nores ◽  
Joaquín Martínez Sánchez ◽  
Higinio González Jorge

Nowadays, unmanned aerial vehicles (UAVs) are extensively used for multiple purposes, such as infrastructure inspection or surveillance. This paper presents a real-time path-planning algorithm for indoor environments, designed for performing contact inspection tasks with UAVs. The only input to the algorithm is the point cloud of the building in which the UAV will navigate. The algorithm is divided into two main parts. The first is a pre-processing step that segments the point cloud into rooms and discretizes each room. The second is the path-planning algorithm, which must execute in real time. In this way, the entire computational load lies in the pre-processing step, making the path-calculation algorithm faster. The method has been tested in different buildings, measuring the execution time for different path calculations. As shown in the results section, the developed algorithm is able to calculate a new path in 8–9 milliseconds. It fulfils the execution-time restrictions and has proven reliable for route calculation.
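Because the pre-processing step reduces each room to a discrete grid, the on-line step becomes a graph search over that grid. A minimal A* sketch over such a grid follows; the abstract does not name the planner, so A* is one plausible realisation, not necessarily the authors' algorithm.

```python
import heapq
import itertools

def astar(free, start, goal):
    """A* over a pre-computed occupancy grid (True = traversable cell).
    `start` and `goal` are (row, col) tuples; returns a list of cells
    from start to goal, or None if the goal is unreachable."""
    rows, cols = len(free), len(free[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()          # tiebreaker so the heap never compares nodes
    frontier = [(h(start), next(tie), 0, start, None)]
    parent, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:            # already expanded via a shorter path
            continue
        parent[cur] = prev
        if cur == goal:              # walk the parent chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and free[nxt[0]][nxt[1]]:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```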


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present DOLARS (Distributed On-line Activity Recognition System), an on-line activity recognition platform in which data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics from the heterogeneous sensor data are integrated into a common feature vector, extracted by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture in which: (i) the stages of AR data processing are deployed on distributed nodes; (ii) temporal cache modules compute metrics that aggregate sensor data to build feature vectors efficiently; (iii) publish-subscribe models both spread data from sensors and orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms classify and recognize the activities. A successful case study of daily-activity recognition, developed in the Smart Lab of the University of Almería (UAL), is presented in this paper. The results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
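The temporal-cache/sliding-window idea can be sketched as follows. The window span and the particular aggregates (mean, standard deviation, last value) are illustrative choices for the example, not DOLARS's exact descriptors.

```python
import time
import statistics
from collections import deque

class SlidingWindow:
    """Keep the last `span_s` seconds of readings from one sensor and
    aggregate them into a fixed-length feature slice."""
    def __init__(self, span_s: float = 30.0):
        self.span = span_s
        self.buf = deque()                   # (timestamp, value) pairs

    def push(self, value: float, ts: float = None) -> None:
        ts = time.time() if ts is None else ts
        self.buf.append((ts, value))
        while self.buf and ts - self.buf[0][0] > self.span:
            self.buf.popleft()               # evict readings outside the window

    def features(self) -> list:
        vals = [v for _, v in self.buf]
        if not vals:
            return [0.0, 0.0, 0.0]
        sd = statistics.pstdev(vals) if len(vals) > 1 else 0.0
        return [statistics.fmean(vals), sd, vals[-1]]

# One window per sensor; concatenating their outputs yields the common
# feature vector fed to the classifier (window names are hypothetical):
# feature_vector = accel_win.features() + loc_win.features() + door_win.features()
```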

