The technology of constructing an informative feature of a natural hyperspectral image area for the classification problem

Author(s):
Maksimilian Khotilin


2016
Vol 2016
pp. 1-10
Author(s):
Yidong Tang
Shucai Huang
Aijun Xue

The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, state-of-the-art SRC typically targets extended surface objects that mix linearly in smooth scenes and assumes that the number of classes is known in advance. To handle small targets against complex backgrounds, this paper establishes a sparse representation based binary hypothesis (SRBBH) model. In this model, a query pixel is represented in two ways: by the background dictionary alone and by the union dictionary (background plus target). The background dictionary is composed of samples selected from a local dual concentric window centered at the query pixel. Classification of each pixel thus becomes an adaptive multiclass problem in which only the number of desired classes is required. Furthermore, the kernel method is employed to improve interclass separability. In kernel space, the coding vector is obtained with the kernel-based orthogonal matching pursuit (KOMP) algorithm, and the query pixel is then labeled according to the characteristics of the coding vectors. Instead of using the reconstruction residuals directly, the different impacts that the background dictionary and the union dictionary have on reconstruction are used for validation and classification, which enhances discrimination and hence improves performance.
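A minimal sketch of the binary-hypothesis residual test described above, assuming random toy dictionaries and using scikit-learn's OrthogonalMatchingPursuit in the original feature space as a stand-in for the paper's kernel-based KOMP; the sparsity level and dictionary sizes are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def srbbh_statistic(pixel, bg_dict, target_dict, n_nonzero=5):
    """Detection statistic: how much the residual drops when target atoms
    are allowed into the dictionary (binary hypothesis test)."""
    union_dict = np.hstack([bg_dict, target_dict])

    def residual(dictionary):
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(dictionary, pixel)
        return np.linalg.norm(pixel - dictionary @ omp.coef_)

    r_bg = residual(bg_dict)        # H0: background dictionary alone
    r_union = residual(union_dict)  # H1: union (background + target) dictionary
    return r_bg - r_union           # large values favour the target hypothesis

# toy usage with random 200-band spectra (illustrative only)
rng = np.random.default_rng(0)
bg = rng.normal(size=(200, 30))     # atoms drawn from the dual concentric window
tgt = rng.normal(size=(200, 5))     # target signatures
pixel = bg[:, 0] + 0.1 * rng.normal(size=200)
print(srbbh_statistic(pixel, bg, tgt))
```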


Author(s):  
Minchao Ye
Yongqiu Xu
Chenxi Ji
Hong Chen
Huijuan Lu
...  

Hyperspectral images (HSIs) have hundreds of narrow, adjacent spectral bands, which results in feature redundancy and reduces classification accuracy. Feature (band) selection helps to remove noisy or redundant features. Most traditional feature selection algorithms can only be performed on a single HSI scene. However, the emergence of massive HSI collections has created a need for joint feature selection across different HSI scenes. Cross-scene feature selection is not a simple problem, since spectral shift exists between different HSI scenes, even when the scenes are captured by the same sensor. This spectral shift makes traditional single-dataset feature selection algorithms inapplicable. To solve this problem, we extend the traditional ReliefF to a cross-domain version, namely cross-domain ReliefF (CDRF). The proposed method makes full use of both source and target domains and increases the similarity of samples belonging to the same class in both domains. In the cross-scene classification problem, it is necessary to consider both the class separability of spectral features and the consistency of features between different scenes. CDRF takes these two factors into account through a cross-domain updating rule for the feature weights. Experimental results on two cross-scene HSI datasets show the superiority of the proposed CDRF in cross-scene feature selection problems.
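The abstract does not give the CDRF updating rule itself, so the following sketch only illustrates the idea: a plain ReliefF weight update per scene combined with a hypothetical cross-scene consistency penalty (the `shift` term and `alpha` weight are assumptions for illustration, not the paper's formula).

```python
import numpy as np

def relieff_weights(X, y, n_neighbors=5, n_iter=200, seed=0):
    """Plain ReliefF: features are rewarded when they separate a random sample
    from its nearest misses more than from its nearest hits.
    (Assumes every class has more than n_neighbors samples.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf
        same = y == y[i]
        hits = np.argsort(np.where(same, dist, np.inf))[:n_neighbors]
        misses = np.argsort(np.where(same, np.inf, dist))[:n_neighbors]
        w += np.abs(X[misses] - X[i]).mean(axis=0) - np.abs(X[hits] - X[i]).mean(axis=0)
    return w / n_iter

def cross_domain_scores(X_src, y_src, X_tgt, y_tgt, alpha=0.5):
    """Combine per-scene ReliefF weights with a *hypothetical* spectral-shift
    penalty so that bands behaving consistently across scenes rank higher."""
    w = relieff_weights(X_src, y_src) + relieff_weights(X_tgt, y_tgt)
    shift = np.abs(X_src.mean(axis=0) - X_tgt.mean(axis=0))   # assumed penalty
    return w - alpha * shift
```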


2013
Vol 2013
pp. 1-7
Author(s):
Semih Dinç
Abdullah Bal

This paper presents a novel approach to the hyperspectral imagery (HSI) classification problem using the Kernel Fukunaga-Koontz Transform (K-FKT). The kernel-based Fukunaga-Koontz Transform offers higher performance for classification problems because of its ability to handle nonlinear data distributions. K-FKT is realized in two stages: training and testing. In the training stage, unlike classical FKT, samples are mapped into a higher-dimensional kernel space, transforming nonlinearly distributed data into a linearly separable form. This provides a more efficient solution for hyperspectral data classification. In the second stage, testing, the Fukunaga-Koontz transformation operator is applied to determine the classes of real-world hyperspectral images. In the experimental section, the performance of the K-FKT classification technique is compared with other methods, including the classical FKT and three types of support vector machines (SVMs).
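A compact sketch of the classical (non-kernel) two-class FKT pipeline the abstract builds on; K-FKT would first map the samples into a kernel-induced space. Matrix sizes, the `eps` regularizer and the `top` parameter are illustrative assumptions.

```python
import numpy as np

def fkt_train(X0, X1, eps=1e-6):
    """Whiten by the summed class correlation matrices, then eigen-decompose
    class 0 in the whitened space (the two class eigenvalues sum to one per axis)."""
    S0 = X0.T @ X0 / len(X0)
    S1 = X1.T @ X1 / len(X1)
    vals, vecs = np.linalg.eigh(S0 + S1)
    P = vecs @ np.diag(1.0 / np.sqrt(vals + eps))   # whitening operator
    lam, V = np.linalg.eigh(P.T @ S0 @ P)           # shared discriminative axes
    return P, V, lam

def fkt_classify(x, P, V, lam, top=5):
    """Assign the class whose dominant axes capture more of the projection energy."""
    z = V.T @ (P.T @ x)
    order = np.argsort(lam)
    energy_c1 = np.sum(z[order[:top]] ** 2)    # small eigenvalues: class-1 axes
    energy_c0 = np.sum(z[order[-top:]] ** 2)   # large eigenvalues: class-0 axes
    return 0 if energy_c0 >= energy_c1 else 1
```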


2020
Vol 86 (9)
pp. 581-588
Author(s):  
Mehmet Akif Günen
Umit Haluk Atasever
Erkan Beşdok

Autoencoder (AE)-based deep neural networks learn complex problems by generating feature-space conjugates of the input data. The learning success of an AE is highly sensitive to the choice of training algorithm. Classifying hyperspectral images (HSIs) from the spectral features of pixels is a highly complex problem because of the multi-dimensional and voluminous nature of the data. In this paper, the contribution of three gradient-based training algorithms (scaled conjugate gradient (SCG), gradient descent (GD), and resilient backpropagation (RP)) to the solution of the HSI classification problem with an AE was analyzed. It was also investigated how neighborhood component analysis affects the classification performance of these training algorithms on HSIs. Two hyperspectral image classification benchmark data sets were used in the experimental analysis. The Wilcoxon signed-rank test indicates that RP is statistically better than SCG and GD in solving the related image classification problem.
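Since scikit-learn ships neither an SCG nor an RP solver, the sketch below only mirrors the experimental pipeline (NCA band weighting, a single-hidden-layer autoencoder, then a classifier), with the available 'adam'/'sgd' solvers standing in for the trainers compared in the paper; all layer sizes are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

def classify_spectra(X_train, y_train, X_test, solver="adam", code_size=32):
    # 1) neighborhood component analysis: a supervised linear map of the bands
    scaler = MinMaxScaler().fit(X_train)
    nca = NeighborhoodComponentsAnalysis(
        n_components=min(code_size, X_train.shape[1]), random_state=0)
    A_train = nca.fit_transform(scaler.transform(X_train), y_train)
    A_test = nca.transform(scaler.transform(X_test))

    # 2) single-hidden-layer autoencoder: an MLP trained to reconstruct its input;
    #    the chosen solver plays the role of the SCG/GD/RP trainers in the paper
    ae = MLPRegressor(hidden_layer_sizes=(code_size,), activation="relu",
                      solver=solver, max_iter=500, random_state=0)
    ae.fit(A_train, A_train)
    encode = lambda A: np.maximum(0.0, A @ ae.coefs_[0] + ae.intercepts_[0])

    # 3) classify pixels in the learned code space
    knn = KNeighborsClassifier(n_neighbors=5).fit(encode(A_train), y_train)
    return knn.predict(encode(A_test))
```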


Sensors
2020
Vol 20 (3)
pp. 811
Author(s):
Yang Li
Huahu Xu
Minjie Bian
Junsheng Xiao

Owing to its important role in video surveillance, pedestrian attribute recognition has become an attractive facet of computer vision research. Changes in viewpoint, illumination, resolution and occlusion make the task very challenging. Existing pedestrian attribute recognition methods often perform unsatisfactorily because they ignore the correlations between pedestrian attributes and spatial information; in this paper, the task is therefore regarded as a spatiotemporal, sequential, multi-label image classification problem. An attention-based neural network consisting of convolutional neural networks (CNN), channel attention (CAtt) and convolutional long short-term memory (ConvLSTM) is proposed (CNN-CAtt-ConvLSTM). Firstly, the salient and correlated visual features of pedestrian attributes are extracted by the pre-trained CNN and CAtt. Then, ConvLSTM is used to further extract spatial information and correlations between pedestrian attributes. Finally, pedestrian attributes are predicted in an optimized sequence based on attribute image-area size and importance. Extensive experiments are carried out on two common pedestrian attribute datasets, the PEdesTrian Attribute (PETA) dataset and the Richly Annotated Pedestrian (RAP) dataset, and higher performance than other state-of-the-art (SOTA) methods is achieved, which demonstrates the superiority and validity of our method.
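The following PyTorch sketch reflects one plausible shape of the CNN-CAtt-ConvLSTM pipeline (backbone features, squeeze-and-excitation-style channel attention, a ConvLSTM cell unrolled once per ordered attribute); module sizes and the attribute ordering are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (CAtt)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # global pooling -> channel weights
        return x * w[:, :, None, None]

class ConvLSTMCell(nn.Module):
    """A single convolutional LSTM cell."""
    def __init__(self, in_channels, hidden):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden, 4 * hidden, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class AttributeNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = resnet50(weights=None)       # ImageNet weights would be loaded in practice
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # (B, 2048, H/32, W/32)
        self.catt = ChannelAttention(2048)
        self.cell = ConvLSTMCell(2048, hidden)
        self.head = nn.Linear(hidden, 1)
        self.hidden = hidden

    def forward(self, images, n_attributes):
        feat = self.catt(self.cnn(images))
        b, _, h_sz, w_sz = feat.shape
        h = feat.new_zeros(b, self.hidden, h_sz, w_sz)
        c = feat.new_zeros(b, self.hidden, h_sz, w_sz)
        logits = []
        for _ in range(n_attributes):           # one step per attribute, in the chosen order
            h, c = self.cell(feat, (h, c))
            logits.append(self.head(h.mean(dim=(2, 3))))
        return torch.cat(logits, dim=1)         # (B, n_attributes) attribute logits
```

In the paper the step order is derived from attribute image-area size and importance; here that ordering is assumed to be fixed externally and only its length is passed in.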


2019
Vol 29 (2)
pp. 177-192
Author(s):
Yedilkhan Amirgaliyev
Vladimir Berikov
Lyailya Cherikbayeva
Konstantin Latuta
Kalybekuuly Bekturgan

In this work, we develop the CASVM and CANN algorithms for the semi-supervised classification problem. The algorithms are based on a combination of ensemble clustering and kernel methods. A probabilistic model of classification using a cluster ensemble is proposed. Within this model, the error probability of CANN is studied, and assumptions under which the error probability converges to zero are formulated. The proposed algorithms are experimentally tested on a hyperspectral image. It is shown that CASVM and CANN are more noise resistant than standard SVM and kNN.
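A rough sketch of the cluster-ensemble ingredient behind CASVM: repeated k-means runs over labelled and unlabelled pixels yield a co-association similarity matrix that can serve as a precomputed SVM kernel (a CANN analogue would instead label each unlabelled pixel by the class with the largest summed co-association similarity). The ensemble sizes and the uniform weighting are assumptions; the paper's construction may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def coassociation(X, n_runs=20, k_range=(5, 15), seed=0):
    """S[i, j] = fraction of ensemble clusterings that put samples i and j together."""
    rng = np.random.default_rng(seed)
    S = np.zeros((len(X), len(X)))
    for r in range(n_runs):
        k = int(rng.integers(*k_range))
        labels = KMeans(n_clusters=k, n_init=4, random_state=r).fit_predict(X)
        S += (labels[:, None] == labels[None, :])
    return S / n_runs

def casvm_predict(X_labelled, y, X_unlabelled):
    """Train an SVM with the co-association matrix as a precomputed kernel."""
    X_all = np.vstack([X_labelled, X_unlabelled])
    S = coassociation(X_all)              # the matrix is PSD, hence a valid kernel
    n = len(X_labelled)
    svm = SVC(kernel="precomputed", C=10.0).fit(S[:n, :n], y)
    return svm.predict(S[n:, :n])
```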


Author(s):  
Sunitha T.
Shyamala J.
Annie Jesus Suganthi Rani A.

Data mining suggests an innovative way of predicting patterns in patients' health risks. The large volume of Electronic Health Records (EHRs) collected over the years provides a rich basis for risk analysis and prediction. An EHR contains digitally stored healthcare information about an individual, such as observations, laboratory tests, diagnostic reports, medications, procedures, patient identifying information and allergies. A special type of EHR is the Health Examination Record (HER) from annual general health check-ups. Identifying participants at risk based on their current and past HERs is important for early warning and preventive intervention; by "risk" we mean unwanted outcomes such as mortality and morbidity. This approach is limited by its formulation as a classification problem and is consequently not informative about the specific disease area in which a person is at risk. Moreover, the limited amount of data extracted from the health records is insufficient for accurate risk prediction. The main aim of this project is risk prediction: classifying a progressively developing situation in which the majority of the data are unlabeled.


Vestnik MEI
2020
Vol 5 (5)
pp. 132-139
Author(s):
Ivan E. Kurilenko
Igor E. Nikonov

A method is considered for solving the problem of classifying short text messages, namely customer utterances spoken over an organization's telephone line. To solve this problem, a classifier was developed that combines two methods: a description of the subject area as a hierarchy of entities, and plausible reasoning based on the case-based reasoning approach, which is actively used in artificial-intelligence systems. In various artificial-intelligence-based data analysis problems, these methods have shown high efficiency, scalability, and independence from data structure. As part of the case-based reasoning approach, the classifier modifies the TF-IDF (Term Frequency - Inverse Document Frequency) measure of text content to take into account known information about the distribution of documents by topics. The proposed modification improves classification quality compared with the classical measures, since it accounts for the distribution of words not only within a separate document or topic but across the entire case base. Experimental results are presented that confirm the effectiveness of the proposed metric and of the developed classifier as applied to classifying customer utterances and providing them with the necessary information depending on the classification result. The developed text classification service prototype is used as part of the voice interaction module in a project to robotize the telephone call routing system, shifting user-system interaction from button presses to voice.
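The exact modification of TF-IDF is defined in the paper itself; the sketch below shows one plausible reading, combining the classic IDF with an inverse topic frequency so that words concentrated in few topics of the case base receive larger weights. Function names and the toy case base are hypothetical.

```python
import math
from collections import Counter

def topic_aware_weights(cases):
    """cases: list of (tokens, topic) pairs from the case base.
    Returns term -> idf * itf, where itf is an inverse topic frequency."""
    n_docs = len(cases)
    topics = {topic for _, topic in cases}
    df = Counter()          # number of documents containing the term
    topics_of = {}          # set of topics in which the term occurs
    for tokens, topic in cases:
        for term in set(tokens):
            df[term] += 1
            topics_of.setdefault(term, set()).add(topic)
    return {term: math.log(n_docs / df[term])
                  * math.log(1 + len(topics) / len(topics_of[term]))
            for term in df}

def score(query_tokens, case_tokens, weights):
    """Similarity of a query utterance to a stored case: sum of shared term weights."""
    tf = Counter(case_tokens)
    return sum(tf[t] * weights.get(t, 0.0) for t in set(query_tokens))

# toy usage with a hypothetical three-case base
cases = [("how to reset my password".split(), "account"),
         ("invoice for last month".split(), "billing"),
         ("password reset link expired".split(), "account")]
w = topic_aware_weights(cases)
print(score("reset password".split(), cases[0][0], w))
```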

