Concepts and Real-Time Applications of Deep Learning

Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 982 ◽  
Author(s):  
Hyo Lee ◽  
Ihsan Ullah ◽  
Weiguo Wan ◽  
Yongbin Gao ◽  
Zhijun Fang

Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach to MMR using the SqueezeNet architecture. The frontal views of vehicle images are first extracted and fed into a deep network for training and testing. A variant of the vanilla SqueezeNet with bypass connections between the Fire modules is employed in this study, which makes our MMR system more efficient. Experimental results on our collected large-scale vehicle dataset indicate that the proposed model achieves a 96.3% recognition rate at rank-1 with a processing time of 108.8 ms. For inference tasks, the deployed deep model requires less than 5 MB of storage and is therefore highly viable for real-time applications.
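As a rough illustration (not the authors' code), the SqueezeNet Fire module with a bypass connection can be sketched in NumPy; the channel sizes below are illustrative assumptions, and the bypass requires the expand channels to match the input channels:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def conv3x3(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3), zero padding of 1
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], xp[:, i:i+h, j:j+wd])
    return out

def relu(x):
    return np.maximum(x, 0)

def fire_module(x, w_squeeze, w_exp1, w_exp3, bypass=False):
    # squeeze with 1x1 convolutions, then expand with parallel 1x1 and 3x3
    s = relu(conv1x1(x, w_squeeze))
    e = np.concatenate([relu(conv1x1(s, w_exp1)),
                        relu(conv3x3(s, w_exp3))], axis=0)
    # the bypass (residual) connection adds the module input to its output
    return e + x if bypass else e
```

With a 64-channel input, a 16-channel squeeze, and 32+32 expand channels, the output keeps the 64-channel shape, so the identity bypass is well defined.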


2021 ◽  
Vol 60 (10) ◽  
pp. B119
Author(s):  
Esteban Vera ◽  
Felipe Guzmán ◽  
Camilo Weinberger

Author(s):  
Paolo Russo ◽  
Fabiana Di Ciaccio ◽  
Salvatore Troisi

One of the main issues in underwater robot navigation is accurate vehicle positioning, which heavily depends on the orientation estimation phase. The systems employed for this purpose are affected by different types of noise, related mainly to the sensors and to the irregular noise of the underwater environment. Filtering algorithms can reduce their effect if properly configured, but this process usually requires fine tuning and time. This paper presents DANAE++, an improved denoising autoencoder based on DANAE, which is able to recover Kalman Filter (KF) IMU/AHRS orientation estimations from any kind of noise, regardless of its nature. The original deep learning-based architecture already proved to be robust and reliable, but the enhanced implementation achieves significant improvements in both results and performance. In fact, DANAE++ is able to denoise the three attitude angles at the same time, a result also verified on the estimations provided by the better-performing Extended Kalman Filter (EKF). Further tests could make this method suitable for real-time applications in navigation tasks.
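The Kalman filtering stage that DANAE++ refines can be illustrated with a minimal scalar example (a deliberate simplification of the full IMU/AHRS filter; the process and measurement noise parameters `q` and `r` are illustrative assumptions):

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=0.1):
    # Scalar Kalman filter smoothing a noisy angle sequence z.
    # q: process noise variance, r: measurement noise variance.
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                 # predict: state assumed constant, uncertainty grows
        kg = p / (p + r)       # Kalman gain
        x += kg * (zk - x)     # update with the new measurement
        p *= (1.0 - kg)        # shrink the uncertainty
        out[k] = x
    return out
```

In the DANAE++ pipeline, the output of a filter like this (per attitude angle) is what the denoising autoencoder further cleans up.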


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3079
Author(s):  
Sudhakar Sengan ◽  
Ketan Kotecha ◽  
Indragandhi Vairavasundaram ◽  
Priya Velayutham ◽  
Vijayakumar Varadarajan ◽  
...  

Statistical reports indicate that, from 2011 to 2021, more than 11,915 stray animals, such as cats, dogs, goats, and cows, as well as wild animals, were wounded in road accidents. Most of these accidents occurred due to driver negligence and drowsiness. These issues can be addressed effectively by modeling the interaction between vehicles and stray or wild animals and by raising pedestrian awareness. This paper presents a detailed discussion of GPU-based embedded systems and real-time object detection and tracking (ODT) applications. Machine learning (ML) trains machines to recognize images more accurately than humans. This work provides a unique, real-time solution using a deep-learning real 3D motion-based YOLOv3 (DL-R-3D-YOLOv3) model for ODT of images in motion. It also explores methods for handling multiple views of flexible objects using 3D reconstruction, especially for stray and wild animals. Computer vision-based IoT devices are also targeted by this DL-R-3D-YOLOv3 model, which seeks solutions by forecasting image filters to find object properties and semantics for object recognition methods, leading to closed-loop ODT.
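One building block that every YOLO-style ODT pipeline relies on is non-maximum suppression (NMS) over the predicted boxes; a minimal sketch in plain Python (not the paper's implementation; box format and threshold are illustrative):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thr=0.5):
    # Keep the highest-scoring box, drop boxes overlapping it above thr, repeat.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thr]
    return keep
```

In a detector such as YOLOv3 this step runs per class over the raw predictions, so each animal or vehicle ends up with a single box.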


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3726 ◽  
Author(s):  
Bandar Almaslukh ◽  
Abdel Artoli ◽  
Jalal Al-Muhtadi

Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective, unobtrusive, and facilitate real-time applications. However, the majority of related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR, and those that have did so either by using handcrafted features that are less influenced by the position of the smartphone or by building a position-aware HAR system. The performance of these studies still needs improvement to produce a reliable smartphone-based HAR. Thus, in this paper, we propose a deep convolutional neural network model that provides a robust position-independent HAR system. We build and evaluate the performance of the proposed model using the RealWorld HAR public dataset. We find that our proposed deep learning model increases the overall performance for position-independent HAR from 84% to 88% compared to the state-of-the-art traditional machine learning method. In addition, the position detection performance of our model improves substantially, from 89% to 98%. Finally, the recognition time of the proposed model is evaluated in order to validate the applicability of the model for real-time applications.
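Smartphone HAR pipelines of this kind typically segment the raw inertial stream into fixed-length, overlapping windows and normalize them before feeding the CNN; a sketch of that preprocessing step (the window length and overlap below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sliding_windows(signal, win=128, step=64):
    # signal: (T, channels) raw accelerometer/gyroscope stream.
    # Returns (n_windows, win, channels) overlapping segments.
    n = (signal.shape[0] - win) // step + 1
    idx = np.arange(win)[None, :] + step * np.arange(n)[:, None]
    return signal[idx]

def standardize(windows):
    # Per-window, per-channel z-normalization before the CNN.
    mu = windows.mean(axis=1, keepdims=True)
    sd = windows.std(axis=1, keepdims=True) + 1e-8
    return (windows - mu) / sd
```

Each normalized window then becomes one training or inference example for the convolutional model, with the activity label attached per window.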


Author(s):  
Chitra A. Dhawale ◽  
Krtika Dhawale

Artificial Intelligence (AI) is going through its golden era, playing an important role in various real-time applications. Most AI applications use machine learning, which represents the most promising path to strong AI. Deep Learning (DL), itself a kind of Machine Learning (ML), is becoming more and more popular and successful across different use cases and is at the peak of its development; hence, DL is becoming a leader in this domain. To foster the growth of the DL community, many open source frameworks implementing DL algorithms are available, each with its own strengths and target applications. This chapter provides a brief qualitative review of the most popular and comprehensive DL frameworks and informs end users of current trends. This helps them make an informed decision in choosing the DL framework best suited to their needs, resources, and applications, and to the direction of their careers.


2020 ◽  
Author(s):  
Corneliu Arsene

Effective and powerful methods for denoising real electrocardiogram (ECG) signals are important for wearable sensors and devices. Deep Learning (DL) models have been used extensively and with great success in image processing and other domains, but only very recently have they been applied to ECG signal processing. This paper presents several DL models, namely Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Restricted Boltzmann Machines (RBMs), together with more conventional filtering methods (low-pass, high-pass, and Notch filtering) and the standard wavelet-based technique for denoising ECG signals. These methods are trained, tested, and evaluated on different synthetic and real ECG datasets taken from the MIT PhysioNet database and under different simulation conditions (i.e., various lengths of the ECG signals, single or multiple records). The results show that the CNN is a performant model that can be used for off-line ECG denoising applications where it is satisfactory to train on a clean part of an ECG signal from an ECG record and then to test on the same ECG signal with some high level of noise added to it. However, for real-time or near-real-time applications, this task becomes more cumbersome, as the clean part of an ECG signal is likely to be very limited in size. Therefore, the solution put forth in this work is to train a CNN model on 1-second noisy artificial multiple-heartbeat ECG data (i.e., ECG at effort), generated in the first instance from a few sequences of real heartbeat ECG data (i.e., ECG at rest). Afterwards, the trained CNN model can be used in real-life situations to denoise the ECG signal. This also corresponds to clinical practice, where the human is usually put at rest while the ECG is recorded, and the same human is then asked to do some physical exercise while the ECG is recorded at effort.
The quality of the results is assessed visually and also by using the Root Mean Square (RMS) and Signal-to-Noise Ratio (SNR) measures. All CNN models were run on an NVIDIA TITAN V Graphics Processing Unit (GPU) with 12 GB of RAM, which drastically reduces the computational times. Finally, as an element of novelty, the paper also presents a Design of Experiments (DoE) study intended to determine the optimal structure of a CNN model, a type of study not previously reported in the literature.
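The RMS and SNR measures used for the assessment can be computed as follows (a standard formulation in NumPy; the paper does not give its exact definitions, so treat this as an illustrative convention):

```python
import numpy as np

def rms(x):
    # Root Mean Square of a signal.
    return np.sqrt(np.mean(np.square(x)))

def snr_db(clean, denoised):
    # SNR in dB of the denoised signal relative to the clean reference:
    # the residual (denoised - clean) is treated as the remaining noise.
    noise = denoised - clean
    return 20.0 * np.log10(rms(clean) / rms(noise))
```

A higher SNR after denoising indicates that the model removed more of the added noise while preserving the underlying ECG waveform.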

