2-Channel convolutional 3D deep neural network (2CC3D) for fMRI analysis: ASD classification and feature learning

Author(s):  
Xiaoxiao Li ◽  
Nicha C. Dvornek ◽  
Xenophon Papademetris ◽  
Juntang Zhuang ◽  
Lawrence H. Staib ◽  
...  
Author(s):  
Hui Zeng ◽  
Bin Yang ◽  
Xiuqing Wang ◽  
Jiwei Liu ◽  
Dongmei Fu

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 529

With the development of low-cost RGB-D (Red Green Blue-Depth) sensors, RGB-D object recognition has attracted increasing attention in recent years. Deep learning has become popular in image analysis and achieves competitive results. To make full use of the discriminative information in the RGB and depth images, we propose an RGB-D object recognition method based on a multi-modal deep neural network and DS (Dempster-Shafer) evidence theory. First, the RGB and depth images are preprocessed and two convolutional neural networks are trained, one per modality. Next, we perform multi-modal feature learning, fine-tuning the network parameters with the proposed quadruplet-sample-based objective function. Then, two probabilistic classification results are obtained using two sigmoid SVMs (Support Vector Machines) on the learned RGB and depth features. Finally, a DS evidence theory based decision fusion method integrates the two classification results. Compared with other RGB-D object recognition methods, the proposed method adopts two fusion strategies, multi-modal feature learning and DS decision fusion, so that both the discriminative information of each modality and the correlation between the two modalities are exploited. Extensive experiments validate the effectiveness of the proposed method.
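
Since the fusion step in this abstract rests on Dempster's combination rule, a short sketch can make it concrete. The Python/NumPy code below is an illustrative assumption, not the paper's exact formulation: the helper to_mass and the discount factor alpha are hypothetical. Each sigmoid-SVM posterior is discounted into a mass function, leaving some mass on the whole frame of discernment to model classifier unreliability, and the two mass vectors are then combined and renormalized.

    import numpy as np

    def ds_combine(m1, m2):
        """Dempster's rule for mass vectors over n singleton classes plus
        the full frame Theta: m[i] (i < n) is belief committed exactly to
        class i, and m[n] is mass left on Theta (ignorance)."""
        n = len(m1) - 1
        fused = np.zeros_like(m1)
        # Two focal elements intersect in class i when both sources name
        # class i, or when one names class i and the other is ignorant.
        fused[:n] = m1[:n] * m2[:n] + m1[:n] * m2[n] + m1[n] * m2[:n]
        fused[n] = m1[n] * m2[n]            # both sources ignorant
        conflict = 1.0 - fused.sum()        # mass on contradictory pairs
        return fused / (1.0 - conflict)     # Dempster normalization

    def to_mass(probs, alpha=0.9):
        # Discount a posterior into a mass function, reserving 1 - alpha
        # for Theta to encode that the classifier itself may be wrong.
        return np.append(alpha * np.asarray(probs), 1.0 - alpha)

    p_rgb = [0.70, 0.20, 0.10]      # sigmoid-SVM posterior, RGB stream
    p_depth = [0.60, 0.30, 0.10]    # sigmoid-SVM posterior, depth stream
    fused = ds_combine(to_mass(p_rgb), to_mass(p_depth))
    print("fused beliefs:", fused[:-1], "-> class", fused[:-1].argmax())

Keeping some mass on the frame lets a confident stream dominate an uncertain one rather than be vetoed by it, which is the usual argument for DS fusion over simple averaging of posteriors.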


IEEE Access ◽  
2021 ◽  
pp. 1-1

Author(s):  
Yajing Zhou ◽  
Yuemin Zheng ◽  
Jin Tao ◽  
Mingwei Sun ◽  
Qinglin Sun ◽  
...  

2021 ◽  
Author(s):  
Xiali Li ◽  
Zhengyu Lv ◽  
Bo Liu ◽  
Licheng Wu ◽  
Zheng Wang

Author(s):  
Weishan Dong ◽  
Ting Yuan ◽  
Kai Yang ◽  
Changsheng Li ◽  
Shilei Zhang

In this paper, we study learning generalized driving-style representations from automobile GPS trip data. We propose a novel Autoencoder Regularized deep neural Network (ARNet) and a trip-encoding framework, trip2vec, that learn drivers' driving styles directly from GPS records by combining supervised and unsupervised feature learning in a unified architecture. Experiments on a challenging driver-number estimation problem and on driver identification show that ARNet learns a good generalized driving-style representation: it significantly outperforms existing methods and alternative architectures, achieving the lowest mean estimation error (0.68, i.e., less than one driver) and the highest identification accuracy (at least a 3% improvement over traditional supervised learning methods).
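
The unified architecture described above, supervised and unsupervised feature learning sharing one representation, can be sketched in a few lines. The PyTorch module below is a minimal illustration under assumed flat trip features; the published ARNet operates on encoded GPS trip sequences, and the layer sizes, driver count, and loss weight lam are placeholders, not values from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AutoencoderRegularizedNet(nn.Module):
        """Shared encoder with two heads: a decoder that reconstructs the
        input (unsupervised) and a classifier over drivers (supervised)."""
        def __init__(self, in_dim=64, hid_dim=32, n_drivers=10):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
            self.decoder = nn.Linear(hid_dim, in_dim)        # reconstruction head
            self.classifier = nn.Linear(hid_dim, n_drivers)  # identification head

        def forward(self, x):
            z = self.encoder(x)           # the learned driving-style embedding
            return self.classifier(z), self.decoder(z)

    def arnet_loss(logits, recon, x, y, lam=0.5):
        # Supervised cross-entropy regularized by the autoencoder term;
        # lam trades identification accuracy against representation generality.
        return F.cross_entropy(logits, y) + lam * F.mse_loss(recon, x)

    model = AutoencoderRegularizedNet()
    x = torch.randn(8, 64)                # a batch of stand-in trip features
    y = torch.randint(0, 10, (8,))        # driver labels
    logits, recon = model(x)
    arnet_loss(logits, recon, x, y).backward()

At inference time the embedding z, rather than the classifier output, would serve as the trip's driving-style representation, in the spirit of the trip2vec framing.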

