Autoencoder Regularized Network For Driving Style Representation Learning

Author(s):  
Weishan Dong ◽  
Ting Yuan ◽  
Kai Yang ◽  
Changsheng Li ◽  
Shilei Zhang

In this paper, we study learning generalized driving style representations from automobile GPS trip data. We propose a novel Autoencoder Regularized deep neural Network (ARNet) and a trip encoding framework, trip2vec, to learn drivers' driving styles directly from GPS records by combining supervised and unsupervised feature learning in a unified architecture. Experiments on a challenging driver number estimation problem and the driver identification problem show that ARNet learns a good generalized driving style representation: it significantly outperforms existing methods and alternative architectures, achieving the lowest average estimation error (0.68, less than one driver) and the highest identification accuracy (at least a 3% improvement over traditional supervised learning methods).
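The core idea, a shared encoding regularized by reconstruction, can be illustrated compactly. Below is a minimal PyTorch sketch in the spirit of ARNet: a shared encoder feeds both a supervised driver-identification head and a reconstruction decoder, and the two losses are optimized jointly. The layer sizes, input dimension, and loss weight `lam` are illustrative assumptions, not values from the paper.

```python
# Sketch of an autoencoder-regularized network: a shared encoder feeds
# a supervised classifier head and an unsupervised reconstruction
# decoder; both losses are minimized jointly. Dimensions and the loss
# weight `lam` are assumptions for illustration only.
import torch
import torch.nn as nn

class ARNetSketch(nn.Module):
    def __init__(self, in_dim=128, hid_dim=64, n_drivers=50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)        # reconstruction branch
        self.classifier = nn.Linear(hid_dim, n_drivers)  # supervised branch

    def forward(self, x):
        z = self.encoder(x)  # shared driving style representation
        return self.classifier(z), self.decoder(z)

def joint_loss(logits, recon, x, y, lam=0.1):
    # cross-entropy for driver identification plus a weighted
    # reconstruction error that regularizes the shared encoder
    return (nn.functional.cross_entropy(logits, y)
            + lam * nn.functional.mse_loss(recon, x))
```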

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 529 ◽  
Author(s):  
Hui Zeng ◽  
Bin Yang ◽  
Xiuqing Wang ◽  
Jiwei Liu ◽  
Dongmei Fu

With the development of low-cost RGB-D (Red Green Blue-Depth) sensors, RGB-D object recognition has attracted increasing attention in recent years, and deep learning has become popular in image analysis, achieving competitive results. To make full use of the discriminative information in the RGB and depth images, we propose an RGB-D object recognition method based on a multi-modal deep neural network and DS (Dempster-Shafer) evidence theory. First, the RGB and depth images are preprocessed and two convolutional neural networks are trained, respectively. Next, we perform multi-modal feature learning using the proposed quadruplet-samples-based objective function to fine-tune the network parameters. Then, two probability classification results are obtained using two sigmoid SVMs (Support Vector Machines) with the learned RGB and depth features. Finally, the DS evidence theory based decision fusion method integrates the two classification results. Compared with other RGB-D object recognition methods, our method adopts two fusion strategies: multi-modal feature learning and DS decision fusion. Both the discriminative information of each modality and the correlation information between the two modalities are exploited. Extensive experimental results validate the effectiveness of the proposed method.
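The final fusion step can be sketched concretely. In the simplified case where each classifier's probability vector is treated as a mass function over singleton classes only, Dempster's rule of combination reduces to a normalized elementwise product. The sketch below assumes this simplification; the paper's exact formulation may differ, for example by assigning mass to compound hypotheses.

```python
# Dempster-Shafer fusion of two classifiers' outputs over the same K
# classes, assuming all mass sits on singleton classes. Under that
# assumption Dempster's rule is an elementwise product renormalized by
# the non-conflicting mass. Illustrative simplification only.
import numpy as np

def ds_fuse(m_rgb, m_depth):
    """Combine two singleton mass functions with Dempster's rule."""
    joint = m_rgb * m_depth           # agreement mass on each class
    conflict = 1.0 - joint.sum()      # mass lost to conflicting class pairs
    return joint / (1.0 - conflict)   # renormalize by non-conflict mass

# Example: the RGB and depth classifiers partially disagree;
# fusion still favors class 0, where both assign the most mass.
m_rgb = np.array([0.6, 0.3, 0.1])
m_depth = np.array([0.5, 0.2, 0.3])
fused = ds_fuse(m_rgb, m_depth)
print(fused, fused.argmax())  # ~[0.77, 0.15, 0.08], class 0
```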


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Jiaman Ding ◽  
Qingbo Luo ◽  
Lianyin Jia ◽  
Jinguo You

With the rapid expansion of big data across all domains, data-driven and deep learning-based fault diagnosis methods in the chemical industry have become a major research topic in recent years. Besides deep neural networks, deep forest offers a new approach to deep representation learning and overcomes shortcomings of deep neural networks such as strong parameter dependence and high training cost. However, the standard cascade forest does not take the ability of each base classifier into account, which may weaken its discriminative power. In this paper, a multigrained scanning-based weighted cascade forest (WCForest) is proposed and applied to fault diagnosis in chemical processes. Given the high-dimensional nonlinear data of chemical processes, WCForest first designs a set of suitable windows for the multigrained scanning strategy to learn a data representation. Next, considering the fitting quality of each forest classifier, a weighting strategy is proposed that calculates the weight of each forest in the cascade structure at no additional computational cost, improving the overall performance of the model. To demonstrate the effectiveness of WCForest, it is applied to the benchmark Tennessee Eastman (TE) process. Experiments show that WCForest achieves better results than related approaches across various evaluation metrics.
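A hypothetical sketch of the weighting idea for a single cascade layer is shown below, using scikit-learn forests. Each forest's class-probability output is weighted by its out-of-bag accuracy, a fitting-quality measure that comes free with bagging and so costs no extra training passes; the specific rule (normalized OOB score) is an assumption for illustration, not necessarily the paper's exact scheme.

```python
# One cascade layer with quality-weighted forests, WCForest-style:
# each forest's class-probability vector is weighted by its normalized
# out-of-bag accuracy, and the fused vector is concatenated with the
# input features for the next level. Weighting rule is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

def weighted_cascade_layer(X, y):
    forests = [
        RandomForestClassifier(n_estimators=100, oob_score=True, bootstrap=True),
        ExtraTreesClassifier(n_estimators=100, oob_score=True, bootstrap=True),
    ]
    probas, weights = [], []
    for f in forests:
        f.fit(X, y)
        # training-set probabilities for brevity; a real cascade would
        # use k-fold out-of-sample predictions to avoid overfitting
        probas.append(f.predict_proba(X))
        weights.append(f.oob_score_)  # fitting quality, no extra passes
    w = np.asarray(weights) / np.sum(weights)  # normalize across forests
    fused = sum(wi * p for wi, p in zip(w, probas))
    # augmented features passed on to the next cascade level
    return np.hstack([X, fused])
```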

