Estimation Method Based on Deep Neural Network for Consecutively Missing Sensor Data

2018 ◽  
Vol 61 (6) ◽  
pp. 258-266 ◽  
Author(s):  
Feng Liu ◽  
Huilin Li ◽  
Zhong Yang


Author(s):  
Mostafa H. Tawfeek ◽  
Karim El-Basyouny

Safety Performance Functions (SPFs) are regression models used to predict the expected number of collisions as a function of various traffic and geometric characteristics. One of the integral components in developing SPFs is the availability of accurate exposure factors, that is, annual average daily traffic (AADT). However, AADTs are not often available for minor roads at rural intersections. This study aims to develop a robust AADT estimation model using a deep neural network. A total of 1,350 rural four-legged, stop-controlled intersections from the Province of Alberta, Canada, were used to train the neural network. The results of the deep neural network model were compared with the traditional estimation method, which uses linear regression. The results indicated that the deep neural network model improved the estimation of minor roads’ AADT by 35% when compared with the traditional method. Furthermore, SPFs developed using linear regression resulted in models with statistically insignificant AADTs on minor roads. Conversely, the SPF developed using the neural network provided a better fit to the data with both AADTs on minor and major roads being statistically significant variables. The findings indicated that the proposed model could enhance the predictive power of the SPF and therefore improve the decision-making process since SPFs are used in all parts of the safety management process.
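The SPF described above predicts expected collisions from exposure (AADT) and site characteristics. A minimal sketch of the standard power-form SPF common in the road-safety literature is shown below; the coefficient values (`b0`, `b1`, `b2`) are illustrative placeholders, not the fitted values from this study, and the study's actual model form may include additional geometric variables.

```python
import math

def spf_predicted_collisions(aadt_major, aadt_minor, b0=-8.0, b1=0.6, b2=0.4):
    """Expected annual collisions at an intersection under a standard
    power-form SPF: N = exp(b0) * AADT_major**b1 * AADT_minor**b2.
    Coefficients here are illustrative placeholders, not the paper's
    fitted values."""
    return math.exp(b0) * aadt_major ** b1 * aadt_minor ** b2
```

Under this form, a statistically insignificant minor-road AADT (as found with the linear-regression estimates) effectively forces `b2` toward zero, removing minor-road exposure from the prediction, which is why a more accurate AADT estimate can restore its explanatory power.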


2019 ◽  
Author(s):  
Peng Sun

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] With the widespread usage of many different types of sensors in recent years, large amounts of diverse and complex sensor data have been generated and analyzed to extract useful information. This dissertation focuses on two types of data: aerial images and physiological sensor data. Several new methods have been proposed based on deep learning techniques to advance the state-of-the-art in analyzing these data. For aerial images, a new method for designing effective loss functions for training deep neural networks for object detection, called adaptive salience biased loss (ASBL), has been proposed. In addition, several state-of-the-art deep neural network models for object detection, including RetinaNet, UNet, Yolo, etc., have been adapted and modified to achieve improved performance on a new set of real-world aerial images for bird detection. For physiological sensor data, a deep learning method for alcohol usage detection, called Deep ADA, has been proposed to improve the automatic detection of alcohol usage (ADA) system, which is a statistical data analysis pipeline that detects drinking episodes based on wearable physiological sensor data collected from real subjects. Object detection in aerial images remains a challenging problem due to low image resolutions, complex backgrounds, and variations in the sizes and orientations of objects in images. The new ASBL method has been designed for training deep neural network object detectors to achieve improved performance. ASBL can be implemented at the image level, which is called image-based ASBL, or at the anchor level, which is called anchor-based ASBL. The method computes saliency information of input images and anchors generated by deep neural network object detectors, and weights different training examples and anchors differently based on their corresponding saliency measurements. 
It gives complex images and difficult targets more weight during training. In our experiments using two of the largest public benchmark data sets of aerial images, DOTA and NWPU VHR-10, the existing RetinaNet was trained using ASBL to generate a one-stage detector, ASBL-RetinaNet. ASBL-RetinaNet significantly outperformed the original RetinaNet by 3.61 mAP and 12.5 mAP on the two data sets, respectively. In addition, ASBL-RetinaNet outperformed 10 other state-of-the-art object detection methods. To improve bird detection in aerial images, the Little Birds in Aerial Imagery (LBAI) dataset has been created from real-life aerial imagery data. LBAI contains various flocks and species of birds that are small in size, ranging from 10 by 10 pixels to 40 by 40 pixels. The dataset was labeled and further divided into two subsets, Easy and Hard, based on the complexity of the background. We have applied and improved some of the best deep learning models on LBAI images, including object detection techniques, such as YOLOv3, SSD, and RetinaNet, and semantic segmentation techniques, such as U-Net and Mask R-CNN. Experimental results show that RetinaNet performed the best overall, outperforming other models by 1.4 and 4.9 F1 points on the Easy and Hard LBAI subsets, respectively. For physiological sensor data analysis, Deep ADA has been developed to extract features from physiological signals and predict alcohol usage of real subjects in their daily lives. The features are extracted using Convolutional Neural Networks without any human intervention. A large amount of unlabeled data has been used in an unsupervised learning manner to improve the quality of learned features. The method outperformed traditional feature extraction methods by up to 19% higher accuracy.
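The core idea behind ASBL, as described above, is to weight each training example's loss by its saliency so that complex images and difficult targets contribute more to the gradient. The sketch below illustrates an image-level saliency-weighted loss; the exponential weighting and the `beta` parameter are illustrative assumptions, not the dissertation's exact formulation.

```python
import math

def asbl_weighted_loss(losses, saliencies, beta=1.0):
    """Image-level ASBL sketch: each example's loss is scaled by a
    weight that grows with its saliency score, so harder (more salient)
    examples dominate the average. The exp() weighting and `beta` are
    illustrative assumptions, not the paper's exact formulation."""
    weights = [math.exp(beta * s) for s in saliencies]
    total_w = sum(weights)
    return sum(w * l for w, l in zip(weights, losses)) / total_w
```

With uniform saliency this reduces to the plain mean loss; when a high-loss example also has high saliency, the weighted loss exceeds the plain mean, which is exactly the "more weight to difficult targets" behavior described in the abstract.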


2018 ◽  
Vol 2018 (13) ◽  
pp. 194-1-194-6
Author(s):  
Koichi Taguchi ◽  
Manabu Hashimoto ◽  
Kensuke Tobitani ◽  
Noriko Nagata

Sensors ◽  
2016 ◽  
Vol 16 (10) ◽  
pp. 1695 ◽  
Author(s):  
Peng Jiang ◽  
Zhixin Hu ◽  
Jun Liu ◽  
Shanen Yu ◽  
Feng Wu

2018 ◽  
Vol 8 (11) ◽  
pp. 2180 ◽  
Author(s):  
Geon Lee ◽  
Hong Kim

This paper proposes a personalized head-related transfer function (HRTF) estimation method based on deep neural networks using anthropometric measurements and ear images. The proposed method consists of three sub-networks for representing personalized features and estimating the HRTF. As input features for the neural networks, the anthropometric measurements of the head and torso are fed to a feedforward deep neural network (DNN), and the ear images are fed to a convolutional neural network (CNN). The outputs of these two sub-networks are then merged into another DNN for estimation of the personalized HRTF. To evaluate the performance of the proposed method, objective and subjective evaluations are conducted. For the objective evaluation, the root mean square error (RMSE) and the log spectral distance (LSD) between the reference HRTF and the estimated one are measured. The proposed method achieves an RMSE of −18.40 dB and an LSD of 4.47 dB, which are lower by 0.02 dB and higher by 0.85 dB, respectively, than the DNN-based method using anthropometric data without pinna measurements. Next, a sound localization test is performed for the subjective evaluation. The results show that the proposed method localizes sound sources with accuracy around 11% and 6% higher than the average HRTF method and the DNN-based method, respectively. In addition, the proposed method reduces the front/back confusion rate by 12.5% and 2.5% compared to the average HRTF method and the DNN-based method, respectively.
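The LSD metric used in the objective evaluation above is a standard measure of spectral mismatch between two magnitude responses. A minimal sketch of the usual definition is below; the paper may additionally average over frequencies, directions, or subjects, so the exact normalization is an assumption here.

```python
import math

def log_spectral_distance(h_ref, h_est):
    """Log spectral distance (LSD, in dB) between a reference and an
    estimated HRTF magnitude spectrum, sampled at the same frequencies:
    sqrt(mean((20*log10(|H_ref| / |H_est|))**2)).
    Exact averaging (over directions/subjects) in the paper may differ."""
    terms = [(20.0 * math.log10(r / e)) ** 2 for r, e in zip(h_ref, h_est)]
    return math.sqrt(sum(terms) / len(terms))
```

Identical spectra give an LSD of 0 dB, and a uniform factor-of-10 magnitude error gives 20 dB, which makes the reported 4.47 dB a fairly tight spectral match.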

