Vulnerable Road Users Classification of Smartphone Sensors Data Using Deep Transfer Learning

2020 ◽  
Vol 12 (21) ◽  
pp. 3508
Author(s):  
Mohammed Elhenawy ◽  
Huthaifa I. Ashqar ◽  
Mahmoud Masoud ◽  
Mohammed H. Almannaa ◽  
Andry Rakotonirainy ◽  
...  

As the Autonomous Vehicle (AV) industry rapidly advances, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. The typical practice in classifying non-motorized road users usually requires significant training time and ignores the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs to different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained ResNet101 and ShuffleNet for a very short time using one epoch of data and then used them as weak learners, which yielded 98.49% classification accuracy. The results of the proposed framework outperform other results in the literature (to the best of our knowledge) and show that using CNN-TL is promising for VRU classification. Because of its relative straightforwardness, its ability to be generalized and transferred, and its potentially high accuracy, we anticipate that this framework could solve various problems related to signal classification.
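A minimal sketch of the pipeline this abstract describes, assuming a one-dimensional smartphone accelerometer trace, a toy recurrence threshold, and torchvision's ImageNet AlexNet weights; the three-class head and the epsilon value are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def recurrence_plot(signal, eps=0.1):
    """Binary recurrence plot: pixel (i, j) is 255 if |x_i - x_j| < eps, else 0."""
    dist = np.abs(signal[:, None] - signal[None, :])
    return ((dist < eps) * 255).astype(np.uint8)

# Toy stand-in for a low-power accelerometer signal (no GPS involved).
signal = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
rp_image = Image.fromarray(recurrence_plot(signal)).convert("RGB").resize((227, 227))

# Transfer learning: freeze the ImageNet features and retrain only a new head
# for the VRU classes (3 classes here is an illustrative assumption).
num_classes = 3
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, num_classes)   # only this layer trains

x = transforms.ToTensor()(rp_image).unsqueeze(0)     # shape (1, 3, 227, 227)
print(model(x).shape)                                 # torch.Size([1, 3])
```

The same recipe applies to SqueezeNet, VGG16, and VGG19 by swapping the backbone and the image size.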


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2540
Author(s):  
Zhipeng Yu ◽  
Jianghai Zhao ◽  
Yucheng Wang ◽  
Linglong He ◽  
Shaonan Wang

In recent years, surface electromyography (sEMG)-based human–computer interaction has been developed to improve people's quality of life. Gesture recognition based on the instantaneous values of sEMG has the advantages of accurate prediction and low latency. However, the low generalization ability of hand gesture recognition methods limits their application to new subjects and new hand gestures, and brings a heavy training burden. For this reason, a transfer learning (TL) strategy based on a convolutional neural network is proposed for instantaneous gesture recognition to improve the generalization performance of the target network. CapgMyo and NinaPro DB1 are used to evaluate the validity of the proposed strategy. Compared with the non-transfer learning (non-TL) strategy, the proposed strategy improves the average accuracy of new-subject and new-gesture recognition by 18.7% and 8.74%, respectively, when up to three repeated gestures are employed, and it reduces the training time by a factor of three. Experiments verify the transferability of spatial features and the validity of the proposed strategy in improving recognition accuracy for new subjects and new gestures while reducing the training burden. The proposed TL strategy provides an effective way of improving the generalization ability of gesture recognition systems.
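A minimal sketch of the transfer idea this abstract describes (not the authors' exact architecture): a small CNN's convolutional, spatial feature extractor is copied to a new subject's network and frozen, and only the classifier is retrained on a few repetitions. The instantaneous HD-sEMG frame shape and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EMGNet(nn.Module):
    def __init__(self, n_gestures):
        super().__init__()
        self.features = nn.Sequential(           # spatial feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_gestures)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

source_net = EMGNet(n_gestures=8)
# ... assume source_net has been trained on the source subjects here ...

# Transfer to a new subject: copy the spatial features, freeze them, and fit a
# fresh classifier on a few repetitions of the new subject's gestures.
target_net = EMGNet(n_gestures=8)
target_net.features.load_state_dict(source_net.features.state_dict())
for p in target_net.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(target_net.classifier.parameters(), lr=1e-3)
x = torch.randn(4, 1, 8, 16)                     # 4 instantaneous sEMG frames (toy shape)
y = torch.randint(0, 8, (4,))
loss = nn.CrossEntropyLoss()(target_net(x), y)
loss.backward()
optimizer.step()
```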


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Longzhi Zhang ◽  
Dongmei Wu

Grasp detection based on convolutional neural networks has achieved notable progress. However, overfitting in multilayer convolutional neural networks still occurs and leads to poor detection precision. To achieve high detection accuracy, a single-target grasp detection network based on a convolutional neural network, which generalizes the fitting of angle and position, is proposed here. The network takes an image as input and outputs the grasping parameters, including angle and position, in an end-to-end manner. In particular, the dataset is preprocessed so that it fully covers the model's input space, and transfer learning is used to avoid network overfitting. A series of experimental results indicate that, for single-object grasping, the proposed network achieves good detection results and high accuracy, demonstrating strong generalization across orientations and categories.
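A minimal sketch of an end-to-end single-target grasp regressor in the spirit of this abstract (not the authors' network): a pre-trained backbone whose head is replaced to output grasp position, angle, and width, with transfer learning used to limit overfitting. The ResNet-18 backbone and the four-value grasp parameterisation are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)   # (x, y, angle, width)

# Transfer learning counteracts overfitting on a small grasp dataset:
# fine-tune only the last residual block and the new regression head.
for name, p in backbone.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))

img = torch.randn(1, 3, 224, 224)                     # one RGB scene (toy input)
grasp = backbone(img)                                 # predicted grasp parameters
print(grasp.shape)                                    # torch.Size([1, 4])
```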


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2399 ◽  
Author(s):  
Cunwei Sun ◽  
Yuxin Yang ◽  
Chang Wen ◽  
Kai Xie ◽  
Fangqing Wen

The convolutional neural network (CNN) has made great strides in the area of voiceprint recognition, but it needs a large number of data samples to train a deep neural network. In practice, it is difficult to collect a large number of training samples, and the network cannot reach a good convergence state on a limited dataset. To address this problem, a new method using a deep migration hybrid model is proposed, which makes voiceprint recognition from small samples easier to realize. First, transfer learning is used to transfer a network trained on a large voiceprint dataset to the limited target voiceprint dataset for further training, and the fully connected layers of the pre-trained model are replaced by restricted Boltzmann machine layers. Second, data augmentation is adopted to increase the number of voiceprint samples. Finally, fast batch normalization is introduced to speed up network convergence and shorten the training time. The resulting TLCNN-RBM (convolutional neural network mixed with a restricted Boltzmann machine, based on transfer learning) model achieves an average accuracy of over 97%, higher than that of either CNN alone or the TL-CNN network (convolutional neural network based on transfer learning). Thus, an effective method for small-sample voiceprint recognition is provided.
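A minimal sketch of the hybrid transfer idea this abstract describes, not the authors' TLCNN-RBM model: a frozen pre-trained CNN stands in for the network transferred from a large corpus, its fully connected stage is replaced by a Bernoulli restricted Boltzmann machine, a simple classifier predicts speaker identity, and additive-noise augmentation enlarges the small dataset. The SqueezeNet backbone, layer sizes, and augmentation recipe are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Frozen convolutional features transferred from ImageNet (a stand-in for a
# network pre-trained on a large voiceprint corpus).
cnn = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT).features.eval()

def embed(spectrograms):
    """Map (N, 3, 224, 224) spectrogram images to (N, 512) feature vectors."""
    with torch.no_grad():
        return cnn(spectrograms).mean(dim=(2, 3)).numpy()

def augment(x, copies=3, sigma=0.01):
    """Data augmentation for a small dataset: add low-level noise to each sample."""
    return torch.cat([x] + [x + sigma * torch.randn_like(x) for _ in range(copies)])

x = torch.rand(8, 3, 224, 224)                 # 8 toy spectrogram "images"
y = np.tile(np.arange(4), 2)                   # 4 hypothetical speakers
x_aug, y_aug = augment(x), np.tile(y, 1 + 3)   # labels repeated for each copy

feats = embed(x_aug)
feats = (feats - feats.min()) / (feats.max() - feats.min() + 1e-8)  # RBM expects [0, 1]

# The RBM replaces the fully connected layers; logistic regression outputs speakers.
clf = Pipeline([("rbm", BernoulliRBM(n_components=128, n_iter=10, random_state=0)),
                ("logreg", LogisticRegression(max_iter=500))])
clf.fit(feats, y_aug)
print(clf.predict(feats[:4]))
```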


2021 ◽  
Author(s):  
Farrel Athaillah Putra ◽  
Dwi Anggun Cahyati Jamil ◽  
Briliantino Abhista Prabandanu ◽  
Suhaili Faruq ◽  
Firsta Adi Pradana ◽  
...  

2021 ◽  
pp. 1-10
Author(s):  
Gayatri Pattnaik ◽  
Vimal K. Shrivastava ◽  
K. Parvathi

Pests are a major threat to the economic growth of a country. Applying pesticides is the easiest way to control pest infestation. However, excessive use of pesticides is hazardous to the environment. Recent advances in deep learning have paved the way for early detection and improved classification of pests in tomato plants, which will benefit farmers. This paper presents a comprehensive analysis of 11 state-of-the-art deep convolutional neural network (CNN) models under three configurations: transfer learning, fine-tuning, and scratch learning. Training in transfer learning and fine-tuning starts from pre-trained weights, whereas random weights are used in scratch learning. In addition, data augmentation has been explored to improve performance. Our dataset consists of 859 tomato pest images from 10 categories. The results demonstrate that the highest classification accuracy of 94.87% is achieved with the transfer learning approach by the DenseNet201 model with data augmentation.
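A minimal sketch of the three training configurations this abstract compares, using torchvision's DenseNet201; the 10 pest classes follow the abstract, while the augmentation recipe and the choice of which layers to freeze are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 10   # 10 tomato pest categories

def build_densenet201(mode):
    """mode: 'transfer' (frozen backbone), 'finetune' (all layers trainable,
    pre-trained init), or 'scratch' (random init)."""
    pretrained = mode in ("transfer", "finetune")
    weights = models.DenseNet201_Weights.DEFAULT if pretrained else None
    net = models.densenet201(weights=weights)
    if mode == "transfer":
        for p in net.parameters():
            p.requires_grad = False              # keep ImageNet features fixed
    net.classifier = nn.Linear(net.classifier.in_features, NUM_CLASSES)
    return net

# Data augmentation for the small 859-image pest dataset (illustrative recipe).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

models_by_config = {m: build_densenet201(m) for m in ("transfer", "finetune", "scratch")}
```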


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hong Zhu ◽  
Qianhao Fang ◽  
Hanzhi He ◽  
Junfeng Hu ◽  
Daihong Jiang ◽  
...  

Meningioma is the second most commonly encountered tumor type in the brain. The World Health Organization classifies meningiomas into three grades. Preoperative grade prediction of meningioma is extremely important for clinical treatment planning and prognosis evaluation. In this paper, we present a new deep learning model for assisting the automatic prediction of meningioma grades in order to reduce meningioma recurrence. Our model is based on an improved LeNet-5 convolutional neural network (CNN) and does not require extraction of the diseased tissue, which greatly enhances efficiency. To address the insufficient and unbalanced clinical data of meningioma images, we use an oversampling technique that considerably improves classification accuracy. Experiments on large clinical datasets show that our model achieves an accuracy as high as 83.33% for the classification of meningioma images.
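A minimal sketch of the class-balancing step this abstract relies on: random oversampling of minority grades before training a small LeNet-5-style CNN. The patch size, toy data, and the choice of random duplication as the oversampling technique are illustrative assumptions; the three grades follow the WHO grading mentioned above.

```python
import numpy as np
import torch
import torch.nn as nn

def oversample(images, labels, rng=np.random.default_rng(0)):
    """Duplicate minority-class samples until every grade is equally frequent."""
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.where(labels == c)[0]
        idx.extend(members)                                   # keep originals
        idx.extend(rng.choice(members, size=target - n, replace=True))
    idx = np.array(idx)
    return images[idx], labels[idx]

# LeNet-5-style grade classifier (WHO grades I-III -> 3 classes).
lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 13 * 13, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
    nn.Linear(84, 3),
)

images = np.random.rand(100, 1, 64, 64).astype(np.float32)   # toy MRI patches
labels = np.random.choice(3, size=100, p=[0.7, 0.2, 0.1])    # imbalanced grades
x, y = oversample(images, labels)
logits = lenet(torch.from_numpy(x))
print(logits.shape, np.bincount(y))                          # classes now balanced
```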

