Framework for Automatic Selection of Kernels based on Convolutional Neural Networks and CkMeans Clustering Algorithm

2019, Vol 19 (04), pp. 1950019
Author(s): Maissa Hamouda, Karim Saheb Ettabaa, Med Salim Bouhlel

Convolutional neural networks (CNNs) can learn deep feature representations for hyperspectral imagery (HSI) interpretation and attain excellent classification accuracy when many training samples are available. Owing to this strength in feature representation, several works have built on CNNs; among them, a reliable CNN-based classification approach that uses filters generated by a clustering framework, such as the kMeans algorithm, yielded good results. However, the number of kernels has to be assigned manually. To solve this problem, an HSI classification framework based on CNNs, in which the convolutional filters are adaptively learned from the data by clustering without knowing the number of clusters, has recently been proposed. This framework, built on the two algorithms CNN and kMeans, showed highly accurate results. In the same context, we propose an architecture based on the deep convolutional neural network principle, in which kernels are adaptively learned using a CkMeans network to generate filters without knowing the number of clusters, for hyperspectral classification. With adaptive kernels, the proposed framework, automatic kernel selection by the CkMeans algorithm (AKSCCk), achieves better classification accuracy than the previous frameworks. The experimental results show the effectiveness and feasibility of the AKSCCk approach.
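To make the filter-generation step concrete, the Python sketch below derives convolutional kernels from clustered image patches. It is only a minimal illustration of the idea, not the authors' CkMeans: scikit-learn's MiniBatchKMeans with a fixed number of clusters stands in for the adaptive selection of the cluster count, and all function names, patch sizes and counts are placeholder assumptions.

```python
# Minimal sketch: derive convolutional filters by clustering HSI patches.
# NOT the authors' CkMeans; MiniBatchKMeans with a fixed n_filters stands in
# for the adaptive cluster-number selection described in the abstract.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def kmeans_filters(hsi_cube, patch_size=5, n_filters=32, n_patches=10000, seed=0):
    """hsi_cube: (H, W, B) array -> filters of shape (n_filters, patch_size, patch_size, B)."""
    rng = np.random.default_rng(seed)
    H, W, B = hsi_cube.shape
    ys = rng.integers(0, H - patch_size, n_patches)
    xs = rng.integers(0, W - patch_size, n_patches)
    patches = np.stack([hsi_cube[y:y + patch_size, x:x + patch_size, :].ravel()
                        for y, x in zip(ys, xs)])
    # Standardise each patch so clustering reflects spatial-spectral shape, not brightness.
    patches = (patches - patches.mean(1, keepdims=True)) / (patches.std(1, keepdims=True) + 1e-8)
    km = MiniBatchKMeans(n_clusters=n_filters, random_state=seed).fit(patches)
    filters = km.cluster_centers_.reshape(n_filters, patch_size, patch_size, B)
    # L2-normalise each centroid so it can serve directly as a convolution kernel.
    norms = np.linalg.norm(filters.reshape(n_filters, -1), axis=1) + 1e-8
    return filters / norms[:, None, None, None]
```

The resulting filter bank can be plugged into the first convolutional layer of a CNN; in the adaptive setting described above, the number of filters would be determined by the CkMeans procedure rather than fixed in advance.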

Symmetry, 2020, Vol 12 (3), pp. 427
Author(s): Sanxing Zhang, Zhenhuan Ma, Gang Zhang, Tao Lei, Rui Zhang, ...

Semantic image segmentation, one of the most popular tasks in computer vision, is widely used in autonomous driving, robotics and other fields. Currently, deep convolutional neural networks (DCNNs) are driving major advances in semantic segmentation due to their powerful feature representations. However, DCNNs extract high-level features by strided convolution, which makes it difficult to segment foreground objects precisely, especially when locating object boundaries. This paper presents a novel semantic segmentation algorithm that combines DeepLab v3+ with the superpixel segmentation algorithm quick shift. DeepLab v3+ is employed to generate a class-indexed score map for the input image, while quick shift segments the input image into superpixels. Both outputs are then fed into a class voting module that refines the semantic segmentation result. Extensive experiments on the PASCAL VOC 2012 dataset show that the proposed method provides a more efficient solution.
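A minimal sketch of the class-voting step, assuming the class score map and the RGB image are available as NumPy arrays: quick shift (here via scikit-image) produces superpixels, and every pixel in a superpixel is reassigned the majority class found inside it. Function names and parameter values are illustrative, not the paper's implementation.

```python
# Sketch: refine a per-pixel class map by majority voting inside quick-shift superpixels.
import numpy as np
from skimage.segmentation import quickshift

def refine_with_superpixels(image_rgb, score_map, kernel_size=3, max_dist=6, ratio=0.5):
    """image_rgb: (H, W, 3) float image; score_map: (H, W, C) class scores from DeepLab v3+."""
    labels = np.argmax(score_map, axis=-1)                 # initial per-pixel class indices
    segments = quickshift(image_rgb, kernel_size=kernel_size,
                          max_dist=max_dist, ratio=ratio)  # superpixel ids, shape (H, W)
    refined = labels.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        votes = np.bincount(labels[mask])                  # class histogram within the superpixel
        refined[mask] = np.argmax(votes)                   # assign the majority class
    return refined
```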


2019, Vol 491 (2), pp. 2280-2300
Author(s): Kaushal Sharma, Ajit Kembhavi, Aniruddha Kembhavi, T Sivarani, Sheelu Abraham, ...

Due to the ever-expanding volume of observed spectroscopic data from surveys such as SDSS and LAMOST, it has become important to apply artificial intelligence (AI) techniques to the analysis of stellar spectra, both for spectral classification and for regression problems such as the determination of the stellar atmospheric parameters $T_\mathrm{eff}$, $\rm {\log g}$, and [Fe/H]. We propose an automated approach for the classification of stellar spectra in the optical region using convolutional neural networks (CNNs). Traditional machine learning (ML) methods with ‘shallow’ architectures (usually up to two hidden layers) have been trained for these purposes in the past. However, deep learning methods with a larger number of hidden layers can exploit finer details in the spectrum, which results in improved accuracy and better generalization. Studying finer spectral signatures also enables us to determine accurate differential stellar parameters and to find rare objects. We examine various machine and deep learning algorithms, including artificial neural networks, random forests, and CNNs, to classify stellar spectra using the Jacoby Atlas, ELODIE, and MILES spectral libraries as training samples. We test the performance of the trained networks on the Indo-U.S. Library of Coudé Feed Stellar Spectra (CFLIB). We show that using CNNs we are able to reduce the classification error to 1.23 spectral subclasses, compared to the two subclasses achieved in past studies with ML approaches. We further apply the trained model to classify stellar spectra retrieved from the SDSS database with SNR > 20.
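As a rough illustration of the model family involved, the PyTorch sketch below defines a small 1-D CNN that maps a continuum-normalised flux vector to spectral-subclass logits. The layer widths, kernel sizes and number of output classes are placeholder assumptions and do not reproduce the architecture trained in the paper.

```python
# Illustrative 1-D CNN for stellar spectral classification (not the paper's architecture).
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_classes=70):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # length-independent pooling
        )
        self.classifier = nn.Linear(64, n_classes)        # one logit per spectral subclass

    def forward(self, flux):                              # flux: (batch, n_flux_bins)
        x = self.features(flux.unsqueeze(1))              # add channel dim -> (batch, 1, bins)
        return self.classifier(x.flatten(1))              # raw logits; train with CrossEntropyLoss

model = SpectrumCNN()
logits = model(torch.randn(8, 4000))                      # e.g. a batch of 8 spectra with 4000 flux bins
```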


Author(s): D. Wittich, F. Rottensteiner

Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNNs), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA so that it is applicable to the pixel-based classification of remotely sensed data based on this network. The method tries to learn a feature representation that is domain invariant; domain invariance is measured by a classifier’s inability to predict from which domain a sample was generated. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that, when using a proper layer for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.
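The adversarial part of the method can be sketched roughly as follows (PyTorch), in the spirit of adversarial discriminative DA: a domain discriminator is trained to tell source from target feature maps at a chosen layer, while the target encoder is updated to fool it. The encoder and discriminator modules, the optimizers and the choice of adaptation layer are assumptions for illustration, not the authors' exact training procedure.

```python
# Schematic ADDA-style adaptation step; modules and optimizers are assumed to exist.
import torch
import torch.nn.functional as F

def adaptation_step(src_encoder, tgt_encoder, discriminator, opt_d, opt_t, x_src, x_tgt):
    # 1) Train the discriminator to separate source from target features.
    with torch.no_grad():
        f_src = src_encoder(x_src)          # feature maps from the chosen adaptation layer
        f_tgt = tgt_encoder(x_tgt)
    d_src, d_tgt = discriminator(f_src), discriminator(f_tgt)   # domain logits
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
              F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the target encoder so its features are indistinguishable from source features.
    d_tgt = discriminator(tgt_encoder(x_tgt))
    loss_t = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    opt_t.zero_grad()
    loss_t.backward()
    opt_t.step()
    return loss_d.item(), loss_t.item()
```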


Energies, 2020, Vol 13 (13), pp. 3374
Author(s): Anthony Faustine, Lucas Pereira

Appliance recognition is one of the vital sub-tasks of non-intrusive load monitoring (NILM), in which a machine learning classifier is used to detect and recognize active appliances from power measurements. The performance of the appliance classifier depends strongly on the signal features used to characterize the loads. Recently, appliance features derived from voltage–current (V–I) waveforms have been used extensively to describe appliances. However, the performance of V–I-based approaches remains unsatisfactory, as these features are not distinctive enough to recognize devices that fall into the same category. Instead, we propose an appliance recognition method that combines the recurrence graph (RG) technique with convolutional neural networks (CNNs). We introduce weighted recurrence graph (WRG) generation, which, given one cycle of current and voltage, produces an image-like representation with more values than the binary output created by an RG. Experimental results on three different sub-metered datasets show that the proposed WRG-based image representation provides a superior feature representation and therefore improves classification performance compared to V–I-based features.
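To give a flavour of recurrence-based representations, the sketch below quantises the pairwise distances of a one-cycle current waveform into a small number of levels, producing a weighted, image-like matrix instead of a binary recurrence plot. The quantisation scheme and parameter values are assumptions for illustration; they are not the WRG definition used in the paper.

```python
# Rough sketch: a weighted recurrence matrix from one cycle of current samples.
import numpy as np

def weighted_recurrence(current, eps=0.05, levels=10):
    """current: 1-D array of one cycle of current samples -> (N, N) image in [0, 1]."""
    x = (current - current.min()) / (current.max() - current.min() + 1e-12)   # normalise
    dist = np.abs(x[:, None] - x[None, :])                    # pairwise distance matrix
    wrg = np.clip(levels - np.floor(dist / eps), 0, levels)   # near pairs -> high weights
    return wrg / levels                                       # CNN-ready image-like input

image = weighted_recurrence(np.sin(np.linspace(0, 2 * np.pi, 100)))  # toy sinusoidal cycle
```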

