Neural-Network-Based Feature Learning: Auto-Encoder

Author(s):  
Haitao Zhao ◽  
Zhihui Lai ◽  
Henry Leung ◽  
Xianyi Zhang

2021 ◽  
Vol 309 ◽  
pp. 01117
Author(s):  
A. Sai Hanuman ◽  
G. Prasanna Kumar

Studies on lane detection are presented: lane identification methods, integration, and evaluation strategies are all examined. System-integration approaches for building more robust detection systems are then evaluated and analyzed, taking into account the inherent limits of camera-based lane-detection systems. Current deep learning approaches to lane detection are essentially CNN-based semantic segmentation networks, in which the segmentation results for the roadway and for the lane markers are combined by a fusion method. By exploiting a large number of frames from a continuous driving environment, we examine lane detection and propose a hybrid deep architecture that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). Because of the rich visual information available and the low cost of camera equipment, a substantial number of existing results concentrate on vision-based lane recognition systems. Extensive tests on two large-scale datasets show that the proposed technique outperforms rival lane detection strategies, particularly in challenging settings. In particular, a CNN block extracts features from each frame, and the CNN features of several continuous frames, which carry time-series properties, are then passed to the RNN block for feature learning and lane prediction.
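The CNN-then-RNN pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' architecture: the single-kernel "CNN", the frame size, the hidden-state dimension, and all weights are invented assumptions, and the sigmoid output stands in for the per-lane prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frame, kernel):
    """Toy per-frame 'CNN': one valid 2D convolution + ReLU + global average pool."""
    h, w = frame.shape
    kh, kw = kernel.shape
    out = np.array([[np.sum(frame[i:i + kh, j:j + kw] * kernel)
                     for j in range(w - kw + 1)]
                    for i in range(h - kh + 1)])
    out = np.maximum(out, 0.0)          # ReLU
    return np.array([out.mean()])       # 1-d feature per frame

def rnn_predict(frames, kernel, Wh, Wx, Wo):
    """Vanilla RNN over the per-frame CNN features; final sigmoid = lane score."""
    h = np.zeros(Wh.shape[0])
    for f in frames:                    # consecutive frames carry the time series
        x = frame_features(f, kernel)
        h = np.tanh(Wh @ h + Wx @ x)
    return 1.0 / (1.0 + np.exp(-(Wo @ h)))

frames = [rng.standard_normal((8, 8)) for _ in range(5)]   # 5 consecutive frames
kernel = rng.standard_normal((3, 3))
Wh = rng.standard_normal((4, 4)) * 0.1
Wx = rng.standard_normal((4, 1))
Wo = rng.standard_normal((1, 4))
score = rnn_predict(frames, kernel, Wh, Wx, Wo)            # value in (0, 1)
```

The key design point mirrored here is that the CNN compresses each frame independently, while only the RNN sees the temporal ordering of the frame features.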


Energies ◽  
2021 ◽  
Vol 14 (17) ◽  
pp. 5286 ◽  
Author(s):  
Ugochukwu Ejike Akpudo ◽  
Jang-Wook Hur

This paper develops a novel hybrid feature learner and classifier for vibration-based fault detection and isolation (FDI) of industrial equipment. The trained model extracts high-level discriminative features from vibration signals and predicts the equipment state. Compared with traditional machine learning (ML)-based classifiers, the convolutional neural network (CNN) and deep neural network (DNN) are not only better suited to real-time applications, but also offer further benefits, including ease of use, automated feature learning, and higher predictive accuracy. This study proposes a hybrid DNN and one-dimensional CNN diagnostics model (D-dCNN) that automatically extracts high-level discriminative features from vibration signals for FDI. Via Softmax averaging at the output layer, the model mitigates the limitations of the standalone classifiers. A diagnostic case study demonstrates the efficiency of the model, with a significant accuracy of 92% (F1 score) and extensive comparative empirical validation.
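The Softmax-averaging fusion at the output layer can be sketched as follows. The logits are made-up values; the point is only the mechanism: each branch produces its own softmax distribution, and the fused prediction is their mean.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-d logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_prediction(logits_dnn, logits_cnn):
    """Average the two branches' softmax distributions, then take the argmax."""
    p = 0.5 * (softmax(logits_dnn) + softmax(logits_cnn))
    return p, int(np.argmax(p))

# Hypothetical 3-class logits from the DNN and 1-D CNN branches:
p, label = fused_prediction(np.array([2.0, 0.5, 0.1]),
                            np.array([1.5, 1.4, 0.2]))
```

Averaging probabilities rather than logits keeps each branch's confidence calibrated on the same scale, which is what lets the ensemble arbitrate when one standalone classifier is uncertain.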


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 529 ◽  
Author(s):  
Hui Zeng ◽  
Bin Yang ◽  
Xiuqing Wang ◽  
Jiwei Liu ◽  
Dongmei Fu

With the development of low-cost RGB-D (Red Green Blue-Depth) sensors, RGB-D object recognition has attracted increasing attention from researchers in recent years. The deep learning technique has become popular in the field of image analysis and has achieved competitive results. To make full use of the effective identification information in the RGB and depth images, we propose an RGB-D object recognition method based on a multi-modal deep neural network and DS (Dempster-Shafer) evidence theory. First, the RGB and depth images are preprocessed and two convolutional neural networks are trained, respectively. Next, we perform multi-modal feature learning using the proposed objective function based on quadruplet samples to fine-tune the network parameters. Then, two probability classification results are obtained using two sigmoid SVMs (Support Vector Machines) with the learned RGB and depth features. Finally, the DS evidence theory based decision fusion method is used to integrate the two classification results. Compared with other RGB-D object recognition methods, our proposed method adopts two fusion strategies: multi-modal feature learning and DS decision fusion. Both the discriminative information of each modality and the correlation information between the two modalities are exploited. Extensive experimental results have validated the effectiveness of the proposed method.
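The DS decision-fusion step can be illustrated with Dempster's rule of combination. This sketch is a simplification under an assumption the abstract does not state: each SVM's probability output is treated as a mass function over singleton classes only (no compound hypotheses), and the two example mass vectors are invented.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same singleton classes.

    Pairs where the sources name different classes are conflicting; their total
    mass K is discarded and the agreeing mass is renormalised by 1 - K.
    """
    joint = np.outer(m1, m2)                 # mass of every (class_i, class_j) pair
    agreement = np.trace(joint)              # both sources name the same class
    conflict = joint.sum() - agreement       # K: mass on conflicting pairs
    return np.diag(joint) / (1.0 - conflict)

m_rgb   = np.array([0.7, 0.2, 0.1])         # hypothetical RGB-SVM output
m_depth = np.array([0.6, 0.3, 0.1])         # hypothetical depth-SVM output
fused = dempster_combine(m_rgb, m_depth)
```

Because both modalities lean toward class 0, the fused belief in class 0 exceeds either individual belief, which is the reinforcing behaviour that makes DS fusion attractive over simple averaging.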


2020 ◽  
Vol 12 (2) ◽  
pp. 1-20
Author(s):  
Sourav Das ◽  
Anup Kumar Kolya

In this work, the authors extract information on distinct baseline features from a popular open-source music corpus and explore new recognition techniques by applying unsupervised Hebbian learning to a single-layer neural network on the same dataset. They present detailed empirical findings showing how such an algorithm can help a single-layer feedforward network learn music features as patterns. The unsupervised training algorithm enables the proposed neural network to achieve an accuracy of 90.36% for successful music feature detection. For comparative analysis, they place their results alongside several previous benchmark works on similar tasks. They further discuss the limitations of the work and provide a thorough error analysis. They hope to gather new information about this particular classification technique and its performance, and to better understand future directions that could improve the art of computational music feature recognition.
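A generic unsupervised Hebbian update of the kind the abstract refers to can be sketched as follows. This is not the authors' network: the data, learning rate, and explicit weight renormalisation (added so the plain Hebbian rule w += lr * y * x does not diverge) are all illustrative assumptions. On toy data whose variance is concentrated on the first axis, the learned weight vector aligns with that dominant direction.

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_train(X, n_epochs=30, lr=0.01):
    """Single-unit Hebbian learning: strengthen w in proportion to y * x.

    Renormalising after every step keeps ||w|| = 1, so the rule behaves like
    stochastic power iteration and converges to the data's dominant direction.
    """
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                     # unit activation
            w += lr * y * x               # Hebbian update: "fire together, wire together"
            w /= np.linalg.norm(w)        # keep the weights bounded
    return w

# Toy feature data: most variance along axis 0
X = rng.standard_normal((200, 3)) * np.array([3.0, 1.0, 0.5])
w = hebbian_train(X)                       # |w[0]| dominates after training
```

This captures why Hebbian training suits unsupervised feature learning: no labels are used, yet the unit discovers the most informative direction in the input.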


2018 ◽  
Vol 8 (10) ◽  
pp. 1768 ◽  
Author(s):  
Abdelhak Belhi ◽  
Abdelaziz Bouras ◽  
Sebti Foufou

Cultural heritage represents a reliable medium for history and knowledge transfer. Cultural heritage assets are often exhibited in museums and heritage sites all over the world. However, many assets are poorly labeled, which decreases their historical value. If an asset’s history is lost, its historical value is also lost. The classification and annotation of overlooked or incomplete cultural assets increase their historical value and allow the discovery of various types of historical links. In this paper, we tackle the challenge of automatically classifying and annotating cultural heritage assets using their visual features as well as the metadata available at hand. Traditional approaches rely mainly on image data and machine-learning-based techniques to predict missing labels. Often, however, visual data are not the only information available. In this paper, we present a novel multimodal classification approach for cultural heritage assets that relies on a multitask neural network in which a convolutional neural network (CNN) is designed for visual feature learning and a regular neural network is used for textual feature learning. These networks are merged and trained using a shared loss. The combined networks rely on both image and textual features to achieve better asset classification. Initial tests on painting assets showed that our approach performs better than traditional CNNs that rely only on images as input.
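The merge-and-share pattern described above (two modality branches joined under one loss) can be sketched as a forward pass. Every dimension and weight here is an invented stand-in: a dense layer replaces the CNN image branch, and cross-entropy on the fused prediction plays the role of the shared loss that trains both branches jointly.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multimodal_forward(img_feat, txt_feat, params):
    """One dense layer per modality, concatenate, then a shared classifier head."""
    W_img, W_txt, W_out = params
    h = np.concatenate([relu(W_img @ img_feat),    # visual branch (CNN stand-in)
                        relu(W_txt @ txt_feat)])   # textual branch
    return softmax(W_out @ h)

def shared_loss(probs, label):
    """Cross-entropy on the fused output: one loss, gradients reach both branches."""
    return -np.log(probs[label] + 1e-12)

params = (rng.standard_normal((4, 8)) * 0.1,       # image-branch weights
          rng.standard_normal((4, 6)) * 0.1,       # text-branch weights
          rng.standard_normal((3, 8)) * 0.1)       # shared 3-class head
probs = multimodal_forward(rng.standard_normal(8), rng.standard_normal(6), params)
loss = shared_loss(probs, label=1)
```

The design choice worth noting is that because the loss is computed after the merge, backpropagating it would update the image and text branches together, letting each modality compensate where the other's labels are incomplete.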

