Seeing under the cover with a 3D U-Net: point cloud-based weight estimation of covered patients

Author(s):  
Alexander Bigalke ◽  
Lasse Hansen ◽  
Jasper Diesel ◽  
Mattias P. Heinrich

Abstract
Purpose: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcoming challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data.
Methods: We propose a novel deep learning framework comprising two 3D CNN modules that solve the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient's volumetric surface without a cover. Second, the patient's weight is predicted from this 3D volume by means of a 3D CNN architecture, which we optimized for weight regression.
Results: We evaluate our approach on a lying pose dataset (SLP) under two different cover conditions. The proposed framework improves considerably on the baseline model, by up to 16%, and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to 52%.
Conclusion: We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that previous contactless methods required for accurate weight estimates and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
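To make the two-stage design concrete, the following is a minimal sketch in PyTorch: a small 3D encoder-decoder stands in for the paper's 3D U-Net that "uncovers" the patient, and a compact 3D CNN regresses body weight from the predicted volume. Layer widths, grid size, and module names are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the two-stage pipeline (illustrative, not the
# authors' exact architecture): stage 1 predicts the uncovered body
# volume, stage 2 regresses a scalar weight from it.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Toy 3D U-Net: predicts the uncovered patient volume from a
    voxelized point cloud (binary occupancy grid)."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1)
        self.enc2 = nn.Sequential(nn.Conv3d(2 * ch, 2 * ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(2 * ch, ch, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(ch, 1, 1))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d = torch.cat([self.up(e2), e1], dim=1)  # skip connection
        return torch.sigmoid(self.dec(d))        # occupancy of uncovered body

class WeightRegressor3D(nn.Module):
    """Toy 3D CNN mapping the uncovered volume to a scalar weight (kg)."""
    def __init__(self, ch=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(2 * ch, 1)

    def forward(self, v):
        return self.head(self.features(v).flatten(1))

# Stage 1 uncovers, stage 2 regresses, mirroring the two separate steps.
voxels = torch.rand(1, 1, 64, 64, 32)   # dummy voxelized point cloud
uncovered = TinyUNet3D()(voxels)
weight_kg = WeightRegressor3D()(uncovered)
```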

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2763
Author(s):  
Kanghee Choi ◽  
Giyoung Byun ◽  
Ayoung Kim ◽  
Youngchul Kim

To prevent driving accidents in cities, local governments have established policies that limit city speeds and create child protection zones near schools. However, if the same policy is applied throughout a city, it can be difficult to maintain smooth traffic flow. A driver obtains information primarily visually while driving, and this information is directly related to traffic safety. In this study, we propose a novel geometric visual model to measure drivers' visual perception and analyze the corresponding information using the line-of-sight method. Three-dimensional point cloud data are used to analyze on-site three-dimensional elements of a city, such as roadside trees and overpasses, which are normally neglected in urban spatial analyses. To investigate drivers' visual perception of roads, we developed an analytic model of three types of visual perception. Using the proposed method, this study creates a risk-level map of Pangyo, South Korea, according to the degree of drivers' visual perception. With the point cloud data from Pangyo, it is possible to analyze actual urban forms, such as roadside trees, building shapes, and overpasses, that are normally excluded from spatial analyses that use a reconstructed virtual space.
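The geometric core of such a perception model is a line-of-sight test against the point cloud. Below is a minimal sketch of that idea, assuming a simple voxel occupancy grid and uniform sampling along the sight line; the function names, the 0.5 m resolution, and the toy "tree" data are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a line-of-sight check over voxelized point cloud data:
# a target is visible from the driver's eye point if no occupied voxel
# lies on the straight segment between them.
import numpy as np

def voxelize(points, res=0.5):
    """Map 3D points (N, 3) to a set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / res).astype(int)))

def line_of_sight(eye, target, occupied, res=0.5, n_samples=200):
    """Sample the eye->target segment; blocked if a sample falls in an
    occupied voxel (ignoring the voxels containing the endpoints)."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    ends = {tuple(np.floor(eye / res).astype(int)),
            tuple(np.floor(target / res).astype(int))}
    for t in np.linspace(0.0, 1.0, n_samples):
        v = tuple(np.floor(((1 - t) * eye + t * target) / res).astype(int))
        if v in occupied and v not in ends:
            return False
    return True

# Example: a point cluster standing in for a roadside tree blocking the view.
tree = np.random.normal(loc=[5.0, 0.0, 2.0], scale=0.4, size=(500, 3))
occ = voxelize(tree)
print(line_of_sight([0, 0, 1.2], [10, 0, 1.2], occ))  # likely False (occluded)
print(line_of_sight([0, 5, 1.2], [10, 5, 1.2], occ))  # True (clear)
```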


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 636 ◽  
Author(s):  
Joacim Dybedal ◽  
Atle Aalerud ◽  
Geir Hovland

This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data that are processed on a local embedded computer and compressed using an octree-based scheme. The compressed data are transferred to a central node, where the individual point clouds from several nodes are decompressed and filtered using a novel method for generating intensity values for sensors that do not natively produce such values. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. The main advantage of processing point cloud data locally on the nodes is scalability. With a dedicated Gigabit Ethernet local network, the proposed solution could be scaled up to approximately 440 sensor nodes, limited only by the processing power of the central node receiving the compressed data from the local nodes. A compression ratio of 40.5 was obtained when compressing a point cloud stream from a single Microsoft Kinect V2 sensor using an octree resolution of 4 cm.
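The octree scheme is essentially resolution-limited compression: points are merged into cells of a fixed size (4 cm in the experiment) before transfer. The sketch below illustrates that principle with plain voxel quantization plus zlib rather than a true octree codec; the 217,088-point frame matches a Kinect V2 depth image (512 × 424), but the achieved ratio depends heavily on scene structure and will not reproduce the reported 40.5 on random data.

```python
# Simplified sketch of resolution-limited point cloud compression
# (voxel quantization standing in for the paper's octree scheme):
# points are snapped to a 4 cm grid, duplicates are dropped, and the
# grid indices are packed as small integers before network transfer.
import zlib
import numpy as np

RES = 0.04  # 4 cm resolution, matching the paper's octree experiment

def compress(points):
    """Quantize float32 XYZ points to unique int16 grid indices, deflate."""
    idx = np.unique(np.round(points / RES).astype(np.int16), axis=0)
    return zlib.compress(idx.tobytes())

def decompress(blob):
    """Recover voxel-center coordinates from the compressed index buffer."""
    idx = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(-1, 3)
    return idx.astype(np.float32) * RES

cloud = np.random.uniform(-5, 5, (217088, 3)).astype(np.float32)  # ~1 frame
blob = compress(cloud)
print(f"compression ratio: {cloud.nbytes / len(blob):.1f}")
```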


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated into a ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data with image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and faster fusion, demonstrating its accuracy and effectiveness.
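As a rough illustration of this workflow, the sketch below renders the cloud into an intensity image, matches features between it and the optical image, and solves the exterior orientation from the resulting 2D-3D correspondences. It is an assumption-laden approximation: ORB features and cv2.solvePnPRansac stand in for the paper's feature matching and collinearity-equation solution, and the top-down projection ignores the GNSS-time-based sequencing.

```python
# Hedged sketch of image-to-point-cloud registration via an intensity
# image (approximation of the workflow, not the paper's implementation).
import cv2
import numpy as np

def intensity_image(points, intensity, res=0.1):
    """Orthographic top-down projection of the cloud into an 8-bit
    intensity image; also returns a pixel -> 3D point lookup grid."""
    ij = np.floor((points[:, :2] - points[:, :2].min(0)) / res).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w), np.uint8)
    lookup = np.zeros((h, w, 3), np.float32)
    img[ij[:, 1], ij[:, 0]] = np.clip(intensity, 0, 255).astype(np.uint8)
    lookup[ij[:, 1], ij[:, 0]] = points
    return img, lookup

def register(optical_gray, points, intensity, K):
    """Match ORB features between the optical and intensity images, then
    estimate the camera's exterior orientation with RANSAC-based PnP."""
    img, lookup = intensity_image(points, intensity)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(optical_gray, None)
    k2, d2 = orb.detectAndCompute(img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts2d = np.float32([k1[m.queryIdx].pt for m in matches])
    pts3d = np.float32([lookup[int(k2[m.trainIdx].pt[1]),
                               int(k2[m.trainIdx].pt[0])] for m in matches])
    valid = np.any(pts3d != 0, axis=1)  # drop pixels with no projected point
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d[valid], pts2d[valid], K, None)
    return rvec, tvec  # exterior orientation of the optical image
```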


Author(s):  
Keisuke YOSHIDA ◽  
Shiro MAENO ◽  
Syuhei OGAWA ◽  
Sadayuki ISEKI ◽  
Ryosuke AKOH

2019 ◽  
Author(s):  
Byeongjun Oh ◽  
Minju Kim ◽  
Chanwoo Lee ◽  
Hunhee Cho ◽  
Kyung-In Kang
