Real-Time Detection of Non-Stationary Objects Using Intensity Data in Automotive LiDAR SLAM

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6781
Author(s):  
Tomasz Nowak ◽  
Krzysztof Ćwian ◽  
Piotr Skrzypczyński

This article aims to demonstrate the feasibility of modern deep learning techniques for the real-time detection of non-stationary objects in point clouds obtained from 3-D light detection and ranging (LiDAR) sensors. The motion segmentation task is considered in the application context of automotive Simultaneous Localization and Mapping (SLAM), where we often need to distinguish between the static parts of the environment, with respect to which we localize the vehicle, and non-stationary objects that should not be included in the map used for localization. Non-stationary objects do not provide repeatable readouts, either because they can be in motion, like vehicles and pedestrians, or because they do not have a rigid, stable surface, like trees and lawns. The proposed approach exploits images synthesized from the intensity data yielded by modern LiDARs along with the usual range measurements. We demonstrate that non-stationary objects can be detected using neural network models trained on 2-D grayscale images in a supervised or unsupervised training process. This concept makes it possible to alleviate the lack of large datasets of 3-D laser scans with point-wise annotations of non-stationary objects. The point clouds are filtered using the corresponding intensity images with labeled pixels. Finally, we demonstrate that detecting non-stationary objects with our approach improves the localization results and map consistency in a laser-based SLAM system.
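The core of such a pipeline — projecting each scan's intensity channel into a 2-D grayscale image, running a segmentation model on that image, and transferring the pixel-wise labels back to the 3-D points — can be sketched roughly as follows. This is a minimal illustration assuming a spherical projection with made-up sensor parameters and helper names (`project_to_intensity_image`, `filter_dynamic_points`); it is not the authors' implementation.

```python
import numpy as np

def project_to_intensity_image(points, intensity, h=64, w=1024,
                               fov_up_deg=3.0, fov_down_deg=-25.0):
    """Spherically project a LiDAR point cloud (N, 3) and its per-point
    intensity (N,) into a 2-D grayscale image of shape (h, w).
    The vertical field-of-view defaults are illustrative values for a
    64-beam sensor; adjust them to the actual LiDAR model."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                     # azimuth angle
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-6), -1.0, 1.0))

    # Normalize the angles to pixel coordinates.
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(np.int32) % w
    v = np.clip((fov_up - pitch) / fov * h, 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = intensity                    # last write wins per pixel
    return image, (v, u)

def filter_dynamic_points(points, pixel_coords, dynamic_mask):
    """Drop points whose projected pixel is labelled non-stationary
    by a 2-D segmentation network (dynamic_mask is a boolean (h, w) array)."""
    v, u = pixel_coords
    keep = ~dynamic_mask[v, u]
    return points[keep]
```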

2013 ◽  
pp. 129-138
Author(s):  
José García-Rodríguez ◽  
Juan Manuel García-Chamizo ◽  
Sergio Orts-Escolano ◽  
Vicente Morell-Gimenez ◽  
José Antonio Serra-Pérez ◽  
...  

This chapter addresses the ability of self-organizing neural network models to manage video and image processing in real time. The Growing Neural Gas (GNG) network, with its attributes of growth, flexibility, rapid adaptation, and excellent-quality representation of the input space, is a suitable model for real-time applications. A number of applications are presented, including image compression, hand and medical image contour representation, surveillance systems, hand gesture recognition systems, and 3D data reconstruction.
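As a rough illustration of how a GNG network adapts to streaming input in real time, the sketch below shows a single, simplified adaptation step (winner search, movement of the winner and its topological neighbours, edge creation). Node insertion, edge ageing, and error accumulation are omitted, and the parameter values are illustrative only, not those used in the chapter.

```python
import numpy as np

def gng_adapt_step(units, edges, x, eps_b=0.2, eps_n=0.006):
    """One simplified adaptation step of a Growing Neural Gas network:
    `units` is an (M, d) array of reference vectors, `edges` a set of
    index pairs, and x a single d-dimensional input sample."""
    dists = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(dists)[:2]             # winner and runner-up

    units[s1] += eps_b * (x - units[s1])       # pull the winner toward the input
    for (a, b) in edges:
        if s1 in (a, b):                       # pull the winner's neighbours, too
            n = b if a == s1 else a
            units[n] += eps_n * (x - units[n])

    edges.add((min(s1, s2), max(s1, s2)))      # connect the two closest units
    return s1, s2
```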


2019 ◽  
Vol 1 (1) ◽  
pp. 450-465 ◽  
Author(s):  
Abhishek Sehgal ◽  
Nasser Kehtarnavaz

Deep learning solutions are being increasingly used in mobile applications. Although there are many open-source software tools for developing deep learning solutions, there is no single, unified set of guidelines for using these tools to deploy such solutions on smartphones in real time. From the variety of available deep learning tools, the most suitable ones are used in this paper to enable real-time deployment of deep learning inference networks on smartphones. A uniform implementation flow is devised for both Android and iOS smartphones. The advantage of using multi-threading to achieve or improve real-time throughput is also showcased. A benchmarking framework consisting of accuracy, CPU/GPU consumption, and real-time throughput is considered for validation purposes. The developed deployment approach allows deep learning models to be turned into real-time smartphone apps with ease, based on publicly available deep learning and smartphone software tools. This approach is applied to six popular or representative convolutional neural network models, and the validation results based on the benchmarking metrics are reported.
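The paper relies on publicly available deep learning and smartphone software tools; one common deployment path, sketched here under the assumption of a TensorFlow/Keras model, is conversion to a TensorFlow Lite flat buffer that can then be bundled into an Android or iOS app. The model choice and optimization flag below are illustrative and are not the paper's exact benchmark configuration.

```python
import tensorflow as tf

# Load (or build) a trained Keras model; MobileNetV2 stands in here for any
# of the convolutional networks a mobile app might deploy.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Convert the model to a TensorFlow Lite flat buffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional weight quantization
tflite_model = converter.convert()

# The resulting file is what the Android/iOS runtime loads on the phone.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```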


2020 ◽  
Vol 10 (3) ◽  
pp. 766 ◽  
Author(s):  
Alec Wright ◽  
Eero-Pekka Damskägg ◽  
Lauri Juvela ◽  
Vesa Välimäki

This article investigates the use of deep neural networks for black-box modelling of audio distortion circuits, such as guitar amplifiers and distortion pedals. A feedforward network based on the WaveNet model is compared with a recurrent neural network model. To determine a suitable hyperparameter configuration for the WaveNet, models of three popular audio distortion pedals were created: the Ibanez Tube Screamer, the Boss DS-1, and the Electro-Harmonix Big Muff Pi. It is also shown that three minutes of audio data is sufficient for training the neural network models. Real-time implementations of the neural networks were used to measure their computational load. To further validate the results, models of two valve amplifiers, the Blackstar HT-5 Metal and the Mesa Boogie 5:50 Plus, were created, and subjective listening tests were conducted. The listening test results show that models of the first amplifier could be identified as different from the reference, but the sound quality of the best models was judged to be excellent. In the case of the second guitar amplifier, many listeners were unable to hear the difference between the reference signal and the signals produced with the two largest neural network models. This study demonstrates that neural network models can convincingly emulate highly nonlinear audio distortion circuits while running in real time, with some models requiring only a relatively small amount of processing power to run on a modern desktop computer.
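A minimal sketch of the recurrent black-box approach, written in PyTorch, is given below; the layer size, the residual connection to the dry signal, and the dummy input are assumptions chosen for illustration and do not reproduce the configurations evaluated in the article.

```python
import torch
import torch.nn as nn

class DistortionRNN(nn.Module):
    """Minimal recurrent black-box model of a distortion circuit: a single
    LSTM layer followed by a linear output, mapping the clean input audio
    stream to the distorted output sample by sample."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, samples, 1) clean guitar signal
        h, state = self.lstm(x, state)
        # Residual connection to the dry signal, so the network learns
        # only the nonlinear "wet" component (an illustrative design choice).
        return self.out(h) + x, state

model = DistortionRNN()
clean = torch.randn(1, 2048, 1)   # a dummy 2048-sample excerpt
wet, _ = model(clean)
```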


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2084
Author(s):  
Junwon Lee ◽  
Kieun Lee ◽  
Aelee Yoo ◽  
Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine the Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near-real-time and real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data and provides services for vehicle driving is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data generated from an AV. The edge node of the driving vehicle transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distributions Transform Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to the coordinate information of the global frame and transmitted to the cloud server. Then, the cloud server creates an HD map by integrating the collected point clouds using the coordinate information.
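The sensor-frame-to-global-frame conversion performed before the data reach the cloud server amounts to a rigid-body (rotation plus translation) transform of each point cloud; the pose values and function name in the sketch below are made up for the example and are not from the study.

```python
import numpy as np

def sensor_to_global(points_sensor, R, t):
    """Transform an (N, 3) point cloud from the sensor (vehicle) frame into
    the global map frame, given rotation R (3x3) and translation t (3,),
    e.g. the pose estimated by SLAM for that scan."""
    return points_sensor @ R.T + t

# Illustrative pose: a 30-degree yaw plus a translation of (10, 5, 0) metres.
yaw = np.radians(30.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([10.0, 5.0, 0.0])

scan = np.random.rand(1000, 3) * 20.0     # dummy LiDAR scan in the sensor frame
scan_global = sensor_to_global(scan, R, t)
```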


2015 ◽  
Vol 48 (6) ◽  
pp. 2043-2053 ◽  
Author(s):  
Frederico A. Limberger ◽  
Manuel M. Oliveira

Biosensors ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 188
Author(s):  
Li-Ren Yeh ◽  
Wei-Chin Chen ◽  
Hua-Yan Chan ◽  
Nan-Han Lu ◽  
Chi-Yuan Wang ◽  
...  

Anesthesia assessment is of utmost importance during surgery. Anesthesiologists use electrocardiogram (ECG) signals to assess the patient's condition and give appropriate medications. However, ECG signals are not easy to interpret; even physicians with more than 10 years of clinical experience may still misjudge them. Therefore, this study uses convolutional neural networks to classify ECG image types to assist in anesthesia assessment. The research uses Internet of Things (IoT) technology to develop ECG signal measurement prototypes. At the same time, it classifies the signals through deep neural networks into four types: QRS widening, sinus rhythm, ST depression, and ST elevation. Three models, ResNet, AlexNet, and SqueezeNet, are developed, with the data split evenly (50%) between training and test sets. Finally, the accuracy and kappa statistics of ResNet, AlexNet, and SqueezeNet in ECG waveform classification were (0.97, 0.96), (0.96, 0.95), and (0.75, 0.67), respectively. This research shows that it is feasible to measure ECG in real time through IoT and then distinguish the four types through deep neural network models. In the future, more types of ECG images will be added, which can improve the real-time classification practicality of the deep model.
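A hedged sketch of the classification setup, using a torchvision ResNet fine-tuned for the four ECG classes named above, is shown below; the exact ResNet variant, preprocessing, and training hyperparameters used in the study may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# The four rhythm classes named in the abstract.
CLASSES = ["QRS widening", "sinus rhythm", "ST depression", "ST elevation"]

# A pretrained ResNet-18 with its final layer replaced for 4-way classification;
# ResNet-18 is an assumption here, standing in for the paper's ResNet model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of ECG images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```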

