Secure and Efficient Transmission of Vision-Based Feedback Control Signals

2021 ◽  
Vol 103 (2) ◽  
Author(s):  
Øystein Volden ◽  
Petter Solnør ◽  
Slobodan Petrovic ◽  
Thor I. Fossen

Abstract An ever-increasing number of autonomous vehicles use bandwidth-greedy sensors such as cameras and LiDARs to sense and act on the world around them. Unfortunately, signal transmission in vehicles is vulnerable to passive and active cyber-physical attacks that may result in loss of intellectual property, or worse yet, the loss of control of a vehicle, potentially causing great harm. Therefore, it is important to investigate efficient cryptographic methods to secure signal transmission in such vehicles against outside threats. This study is motivated by the observation that previous publications have suggested legacy algorithms, which are either inefficient or insecure for vision-based signals. We show how stream ciphers and authenticated encryption can be applied to transfer sensor data securely and efficiently between computing devices suitable for distributed guidance, navigation, and control systems. We provide an efficient and flexible pipeline of cryptographic operations on image and point cloud data in the Robot Operating System (ROS). We also demonstrate how image data can be compressed to reduce the amount of data to be encrypted, transmitted, and decrypted. Experiments on embedded computers verify that modern software cryptographic algorithms perform very well on large sensor data. Hence, the introduction of such algorithms should enhance security without significantly compromising the overall performance.
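The abstract above combines a stream cipher with message authentication. A minimal, stdlib-only sketch of the general encrypt-then-MAC pattern is shown below; the HMAC-based keystream is a toy stand-in for a vetted stream cipher such as ChaCha20 (which the paper's setting would use in practice), and all function names are illustrative, not taken from the paper:

```python
import hmac
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from HMAC-SHA256.
    A real system would use a vetted stream cipher such as ChaCha20."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def encrypt_then_mac(enc_key: bytes, mac_key: bytes,
                     nonce: bytes, plaintext: bytes):
    """XOR the plaintext with the keystream, then tag nonce + ciphertext."""
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt_and_verify(enc_key: bytes, mac_key: bytes,
                       nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Reject tampered frames before decrypting (authenticate first)."""
    expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message rejected")
    return bytes(c ^ k for c, k in
                 zip(ciphertext, keystream(enc_key, nonce, len(ciphertext))))
```

Authenticating before decrypting is what protects the control loop from active attacks: a forged or modified sensor frame is dropped instead of being fed to the controller.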



Author(s):  
Alexander Bigalke ◽  
Lasse Hansen ◽  
Jasper Diesel ◽  
Mattias P. Heinrich

Abstract Purpose: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data. Methods: We propose a novel deep learning framework, which comprises two 3D CNN modules solving the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient’s volumetric surface without a cover. Second, the patient’s weight is predicted from this 3D volume by means of a 3D CNN architecture, which we optimized for weight regression. Results: We evaluate our approach on a lying pose dataset (SLP) under two different cover conditions. The proposed framework considerably improves on the baseline model by up to 16% and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to 52%. Conclusion: We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that were required for accurate weight estimates by previous contactless methods and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
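The pipeline above feeds voxelized point cloud data to its 3D CNN modules. The paper does not give its voxelization code, but a minimal occupancy-grid sketch of the general idea (all names and parameters illustrative) could look like:

```python
def voxelize(points, voxel_size, grid_shape):
    """Occupancy voxelization: map (x, y, z) points into a binary 3D grid.
    Points outside the grid bounds are simply discarded."""
    nx, ny, nz = grid_shape
    grid = [[[0] * nz for _ in range(ny)] for _ in range(nx)]
    for x, y, z in points:
        # Index of the voxel each point falls into.
        i = int(x // voxel_size)
        j = int(y // voxel_size)
        k = int(z // voxel_size)
        if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
            grid[i][j][k] = 1
    return grid
```

A binary occupancy volume like this is a common input representation for 3D U-Nets and 3D CNN regressors, since it puts an irregular point set onto the fixed grid that convolutions require.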


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, sensors integrated in the ULS should be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein they convert the problem of registering point cloud data and image data into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
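The first step above turns a point cloud into an intensity image so that the registration problem becomes 2D feature matching. A minimal sketch of such a projection, assuming each point carries an (x, y, intensity) record and averaging intensities per cell (names and grid layout are illustrative, not from the paper), might look like:

```python
def point_cloud_to_intensity_image(points, cell_size, width, height):
    """Orthographically bin (x, y, intensity) points onto a 2D grid,
    averaging the intensities of points that fall in the same cell."""
    sums = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for x, y, intensity in points:
        col = int(x // cell_size)
        row = int(y // cell_size)
        if 0 <= row < height and 0 <= col < width:
            sums[row][col] += intensity
            counts[row][col] += 1
    # Empty cells default to 0.0; occupied cells get the mean intensity.
    return [[sums[r][c] / counts[r][c] if counts[r][c] else 0.0
             for c in range(width)] for r in range(height)]
```

Once both the intensity image and the optical image exist, any standard feature detector and matcher can supply the correspondences needed to solve the collinearity equation for exterior orientation.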


Author(s):  
Vani Suthamathi Saravanarajan ◽  
Rung-Ching Chen ◽  
Long-Sheng Chen

2021 ◽  
Vol 13 (17) ◽  
pp. 3417
Author(s):  
Yibo He ◽  
Zhenqi Hu ◽  
Kan Wu ◽  
Rui Wang

Repairing point cloud holes has become an important problem in 3D laser point cloud processing, since repairs ensure the integrity and improve the precision of point cloud data. However, for point cloud data with non-characteristic holes, the boundary data of the holes cannot be used for repairing. Therefore, this paper introduces photogrammetry technology and analyzes which density of image point cloud data yields the highest precision. The 3D laser point cloud data are first formed into hole data with sharp features. The image data are calculated into image point clouds at six densities. Next, the barycenterized Bursa model is used to fine-register the two types of data and to delete the overlapping regions. Then, cross-sections are used to evaluate the precision of the combined point cloud data and to determine the optimal density. Three-dimensional models are constructed for these data and the original point cloud data, respectively, and the surface area method and the deviation method are used to compare them. The experimental results show that the ratio of the areas is less than 0.5%, and the maximum standard deviation is 0.0036 m and the minimum is 0.0015 m.
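The fine registration above relies on the Bursa (Bursa-Wolf) seven-parameter similarity transform: three translations, three small rotation angles, and a scale factor. A minimal sketch of applying such a transform with the usual small-angle rotation matrix (parameter names illustrative; the paper's barycenterization step, which shifts both data sets to a common centroid before solving, is omitted here) could look like:

```python
def bursa_transform(points, tx, ty, tz, rx, ry, rz, scale):
    """Apply a seven-parameter Bursa-Wolf similarity transform:
    X' = T + (1 + m) * R * X, with small rotation angles in radians."""
    out = []
    s = 1.0 + scale
    for x, y, z in points:
        # Small-angle rotation matrix applied row by row.
        xr = x + rz * y - ry * z
        yr = -rz * x + y + rx * z
        zr = ry * x - rx * y + z
        out.append((tx + s * xr, ty + s * yr, tz + s * zr))
    return out
```

In practice the seven parameters are estimated by least squares from common points in both clouds; the transform is then applied to bring the image point cloud into the laser scanner's frame before the overlap is trimmed.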


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6423
Author(s):  
Jorge L. Martínez ◽  
Jesús Morales ◽  
Manuel Sánchez ◽  
Mariano Morán ◽  
Antonio J. Reina ◽  
...  

Reactivity is a key component for autonomous vehicles navigating natural terrains in order to safely avoid unknown obstacles. To this end, it is necessary to continuously assess traversability by processing on-board sensor data. This paper describes the case study of the mobile robot Andabata, which classifies traversable points in its vicinity from 3D laser scans acquired in motion to build 2D local traversability maps. Realistic robotic simulations with Gazebo were employed to appropriately adjust reactive behaviors. As a result, successful navigation tests with Andabata using the Robot Operating System (ROS) were performed in natural environments at low speeds.
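Building a 2D local traversability map from classified 3D points can be sketched very simply: project the points onto a grid around the robot and mark a cell traversable when the vertical spread of its points stays small. This is a generic illustration, not Andabata's actual classifier; all names and the height-range criterion are assumptions:

```python
def traversability_map(points, cell_size, width, height,
                       max_height_range=0.15):
    """Mark a 2D cell traversable when the vertical spread of the
    3D points falling into it stays below max_height_range (metres)."""
    zmin = [[None] * width for _ in range(height)]
    zmax = [[None] * width for _ in range(height)]
    for x, y, z in points:
        col = int(x // cell_size)
        row = int(y // cell_size)
        if 0 <= row < height and 0 <= col < width:
            zmin[row][col] = z if zmin[row][col] is None else min(zmin[row][col], z)
            zmax[row][col] = z if zmax[row][col] is None else max(zmax[row][col], z)
    # Cells with no returns are treated as non-traversable (unknown).
    return [[zmin[r][c] is not None and
             (zmax[r][c] - zmin[r][c]) <= max_height_range
             for c in range(width)] for r in range(height)]
```

A reactive planner then only needs to consult this boolean grid at each control cycle, which keeps the per-scan processing cheap enough for navigation in motion.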


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7221
Author(s):  
Baifan Chen ◽  
Hong Chen ◽  
Dian Yuan ◽  
Lingli Yu

The object detection algorithm based on vehicle-mounted lidar is a key component of the perception system on autonomous vehicles. It can provide high-precision and highly robust obstacle information for the safe driving of autonomous vehicles. However, most algorithms are based on a large amount of point cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a fast 3D object detection method based on three main steps: First, the ground segmentation by discriminant image (GSDI) method is used to convert point cloud data into discriminant images for ground point segmentation, which avoids computing directly on the point cloud data and improves the efficiency of ground point segmentation. Second, an image detector is used to generate the region of interest of the three-dimensional object, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed to handle the varying density of the point cloud data, which improves the detection of long-distance objects and avoids the over-segmentation produced by traditional algorithms. Experiments show that this algorithm can meet the real-time requirements of autonomous driving while maintaining high accuracy.
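The key idea behind a dynamic distance threshold is that lidar returns get sparser with range, so the merge distance for clustering must grow with distance from the sensor. The paper's exact DDTC formulation is not given here; the sketch below uses a simple linear threshold model and a greedy merge, with all names and parameters illustrative:

```python
import math

def ddtc_cluster(points, base_threshold=0.3, growth=0.02):
    """Greedy Euclidean clustering whose merge threshold grows linearly
    with range from the sensor, so sparser far-away returns still merge
    instead of being over-segmented."""
    clusters = []
    for p in points:
        rng = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
        threshold = base_threshold + growth * rng
        placed = False
        for cluster in clusters:
            # Join the first cluster containing a close-enough point.
            if any(math.dist(p, q) <= threshold for q in cluster):
                cluster.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters
```

With a fixed threshold, distant objects tend to shatter into several fragments; letting the threshold scale with range keeps each physical object in one cluster without merging nearby objects close to the sensor.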

