An SVM-Based Nested Sliding Window Approach for Spectral–Spatial Classification of Hyperspectral Images

2020 ◽  
Vol 13 (1) ◽  
pp. 114
Author(s):  
Jiansi Ren ◽  
Ruoxiang Wang ◽  
Gang Liu ◽  
Yuanni Wang ◽  
Wei Wu

This paper proposes a Nested Sliding Window (NSW) method based on the correlation between pixel vectors, which extracts spatial information from a hyperspectral image (HSI) and reconstructs the original data. In the NSW method, the neighbourhood window constructed with the target pixel as its centre contains relevant pixels that are spatially adjacent to the target pixel. Within this neighbourhood window, a nested sliding sub-window contains the target pixel and a subset of the relevant pixels. The optimal sub-window position is determined by the average Pearson correlation coefficient between the target pixel and the relevant pixels, and the target pixel is then reconstructed from the pixels in the optimal sub-window weighted by their correlation coefficients. Combining NSW with Principal Component Analysis (PCA) and a Support Vector Machine (SVM) yields a classification model, NSW-PCA-SVM. Experiments on three public datasets verify the effectiveness of the proposed model against two basic models, i.e., SVM and PCA-SVM, and six state-of-the-art models, i.e., CDCT-WF-SVM, CDCT-2DCT-SVM, SDWT-2DWT-SVM, SDWT-WF-SVM, SDWT-2DCT-SVM and Two-Stage. Taking the results on the Indian Pines dataset as an example, the proposed approach has the following advantages in overall accuracy (OA): (1) Compared with SVM (OA = 53.29%) and PCA-SVM (OA = 58.44%), NSW-PCA-SVM (OA = 91.40%) effectively exploits the spatial information of the HSI and improves classification accuracy. (2) The performance of the proposed model is governed mainly by two parameters, the window size in NSW and the number of principal components in PCA, which can be adjusted independently, making parameter tuning more convenient. (3) When the training set is small (20 samples per class), NSW-PCA-SVM achieves OA gains of 2.38–18.40% over the six state-of-the-art models.
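As a concrete illustration of the reconstruction step, the following NumPy sketch implements one plausible reading of the NSW procedure; the window sizes, the `nsw_reconstruct` and `pearson` names, and the clipping of negative correlations are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two spectral vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def nsw_reconstruct(hsi, r, c, win=7, sub=3):
    """Reconstruct pixel (r, c) from its best-correlated nested sub-window."""
    rows, cols, _ = hsi.shape
    target = hsi[r, c]
    half = win // 2
    # Neighbourhood window centred on the target pixel (clipped at the borders).
    r0, r1 = max(r - half, 0), min(r + half + 1, rows)
    c0, c1 = max(c - half, 0), min(c + half + 1, cols)
    best = (-np.inf, None, None)        # (mean correlation, patch, correlations)
    # Slide the nested sub-window; it must contain the target pixel.
    for i in range(r0, r1 - sub + 1):
        for j in range(c0, c1 - sub + 1):
            if not (i <= r < i + sub and j <= c < j + sub):
                continue
            patch = hsi[i:i + sub, j:j + sub].reshape(-1, hsi.shape[2])
            corr = np.array([pearson(target, p) for p in patch])
            if corr.mean() > best[0]:
                best = (corr.mean(), patch, corr)
    # Correlation-weighted average over the optimal sub-window.
    _, patch, corr = best
    w = np.clip(corr, 0.0, None)
    return (w[:, None] * patch).sum(axis=0) / (w.sum() + 1e-12)
```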

2021 ◽  
pp. 1-16
Author(s):  
Ibtissem Gasmi ◽  
Mohamed Walid Azizi ◽  
Hassina Seridi-Bouchelaghem ◽  
Nabiha Azizi ◽  
Samir Brahim Belhaouari

A Context-Aware Recommender System (CARS) suggests more relevant services by adapting them to the user's specific context. Nevertheless, using many contextual factors can increase data sparsity, while too few context parameters fail to introduce contextual effects into the recommendations. Moreover, several CARSs rely on similarity measures, such as cosine similarity and the Pearson correlation coefficient, which are not very effective on sparse datasets. This paper presents a context-aware model that integrates contextual factors into the prediction process when there are insufficient co-rated items. The proposed algorithm uses Latent Dirichlet Allocation (LDA) to learn the latent interests of users from the textual descriptions of items. It then integrates both the explicit contextual factors and their degree of importance into the prediction process through a weighting function, whose weights are learned and optimized with the Particle Swarm Optimization (PSO) algorithm. Results on the MovieLens 1M dataset show that the proposed model achieves an F-measure of 45.51% with a precision of 68.64%. Furthermore, the improvements in MAE and RMSE reach 41.63% and 39.69%, respectively, compared with state-of-the-art techniques.
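The sketch below illustrates the two ingredients described above, learning latent item interests with LDA and applying a learned weight vector to the explicit contextual factors; it uses scikit-learn for brevity, and every name (`item_topic_profiles`, `context_weighted_prediction`, the 20-topic default) is an assumption for illustration rather than the authors' code.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def item_topic_profiles(item_texts, n_topics=20):
    """Learn latent item interests from textual item descriptions with LDA."""
    counts = CountVectorizer(stop_words="english").fit_transform(item_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(counts)               # one topic distribution per item

def context_weighted_prediction(base_pred, context_factors, weights):
    """Adjust a base rating prediction by weighted explicit contextual factors;
    in the paper the weights are learned by PSO, here they are simply applied."""
    return base_pred + float(np.dot(weights, context_factors))
```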


Author(s):  
Weiwei Yang ◽  
Haifeng Song

Recent research has shown that the integration of spatial information is a powerful tool for improving the classification accuracy of hyperspectral images (HSI). However, partitioning homogeneous regions of an HSI remains a challenging task. This paper proposes a novel spectral-spatial classification method based on the support vector machine (SVM). The model consists of a spectral-spatial feature extraction channel (SSC) and an SVM classifier: the SSC extracts spectral-spatial features from the HSI, and the SVM classifies the extracted features, so the model can automatically extract HSI features and classify them. Experiments conducted on the benchmark Indian Pines HSI dataset show that the proposed method yields more accurate classification results than state-of-the-art techniques.
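A minimal sketch of one common way to build such spectral-spatial features, concatenating each pixel's spectrum with the mean spectrum of its neighbourhood before feeding an SVM, is given below; the SSC in the paper may differ, so the window size and feature choice here are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def spectral_spatial_features(hsi, win=5):
    """Concatenate each pixel's spectrum with the mean spectrum of its window."""
    rows, cols, bands = hsi.shape
    half = win // 2
    padded = np.pad(hsi, ((half, half), (half, half), (0, 0)), mode="reflect")
    feats = np.empty((rows, cols, 2 * bands))
    for r in range(rows):
        for c in range(cols):
            patch = padded[r:r + win, c:c + win].reshape(-1, bands)
            feats[r, c] = np.concatenate([hsi[r, c], patch.mean(axis=0)])
    return feats.reshape(-1, 2 * bands)

# X = spectral_spatial_features(indian_pines_cube)   # keep labelled pixels only
# clf = SVC(kernel="rbf", C=100, gamma="scale").fit(X[train_idx], y[train_idx])
```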


2020 ◽  
Vol 23 (4) ◽  
pp. 274-284 ◽  
Author(s):  
Jingang Che ◽  
Lei Chen ◽  
Zi-Han Guo ◽  
Shuaiqun Wang ◽  
Aorigele

Background: The identification of drug-target interactions is essential in drug discovery and helps predict unexpected therapeutic or adverse side effects of drugs. To date, several computational methods have been proposed to predict drug-target interactions because they are fast and low-cost compared with traditional wet-lab experiments. Methods: In this study, we investigated this problem in a different way. According to KEGG, drugs were classified into several groups based on their target proteins, and a multi-label classification model was presented to assign drugs to the correct target groups. To make full use of the known drug properties, five networks were constructed, each representing drug associations with respect to one property. A powerful network embedding method, Mashup, was adopted to extract drug features from these networks, and several machine learning algorithms, including the RAndom k-labELsets (RAKEL) algorithm, the Label Powerset (LP) algorithm and the Support Vector Machine (SVM), were used to build the classification model. Results and Conclusion: Tenfold cross-validation yielded an accuracy of 0.839, an exact match of 0.816 and a Hamming loss of 0.037, indicating good performance of the model. The contribution of each network was also analyzed. Furthermore, the model built on multiple networks was found to be superior to the one built on a single network and to the classic model, indicating the superiority of the proposed approach.
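For orientation, the sketch below shows a plain multi-label pipeline in scikit-learn with a one-vs-rest SVM standing in for the RAKEL/LP problem transformations used in the paper; the file names are placeholders, and the evaluation with exact match and Hamming loss mirrors the reported measures.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, hamming_loss

# X: per-drug feature matrix, e.g. concatenated Mashup embeddings of the five
# association networks; Y: binary indicator matrix of KEGG target groups.
# The file names below are placeholders.
X = np.load("mashup_features.npy")
Y = np.load("target_group_labels.npy")

# One-vs-rest SVM as a simple stand-in for the RAKEL / Label Powerset
# problem-transformation step used in the paper.
clf = OneVsRestClassifier(SVC(kernel="rbf"))
Y_pred = cross_val_predict(clf, X, Y, cv=10)          # tenfold cross-validation

print("exact match :", accuracy_score(Y, Y_pred))     # subset accuracy
print("Hamming loss:", hamming_loss(Y, Y_pred))
```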


2021 ◽  
Vol 13 (11) ◽  
pp. 2166
Author(s):  
Xin Yang ◽  
Rui Liu ◽  
Mei Yang ◽  
Jingjue Chen ◽  
Tianqiang Liu ◽  
...  

This study proposed a new hybrid model based on the convolutional neural network (CNN) to make effective use of historical datasets and produce a reliable landslide susceptibility map. The proposed model consists of two parts: one extracts landslide spatial information using a two-dimensional CNN and pixel windows, and the other captures the correlated features among the conditioning factors using one-dimensional convolutional operations. To evaluate the validity of the proposed model, two pure CNN models and the previously used random forest and support vector machine methods were selected as benchmarks. A total of 621 earthquake-triggered landslides in Ludian County, China and 14 conditioning factors derived from topography, geological, hydrological, geophysical, land use and land cover data were used to generate a geospatial dataset. The conditioning factors were then selected and analyzed by a multicollinearity analysis and the frequency ratio method. Finally, the trained model calculated the landslide probability of each pixel in the study area and produced the resulting susceptibility map. The results indicated that the hybrid model benefitted from the feature extraction capability of the CNN and achieved high performance in terms of the area under the receiver operating characteristic curve (AUC) and statistical indices. Moreover, the proposed model improved the AUC by 6.2% and 3.7% over the two pure CNN models, respectively. The proposed model is therefore capable of accurately mapping landslide susceptibility and provides a promising method for hazard mitigation and land use planning, and it is recommended for application to other areas of the world.
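A compact PyTorch sketch of such a two-branch architecture, a 2-D CNN over a pixel window of the conditioning-factor layers fused with a 1-D CNN over the per-pixel factor vector, is shown below; the layer sizes and the 15x15 window are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class HybridCNN(nn.Module):
    """Two-branch sketch: a 2-D CNN over a pixel window of the conditioning-factor
    layers plus a 1-D CNN over the per-pixel factor vector, fused into a single
    landslide probability."""
    def __init__(self, n_factors=14, win=15):
        super().__init__()
        self.spatial = nn.Sequential(               # input: (B, n_factors, win, win)
            nn.Conv2d(n_factors, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (B, 64)
        self.factor = nn.Sequential(                # input: (B, 1, n_factors)
            nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())  # -> (B, 16)
        self.head = nn.Sequential(nn.Linear(64 + 16, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, window, factors):
        z = torch.cat([self.spatial(window), self.factor(factors)], dim=1)
        return self.head(z)                         # landslide probability per pixel

# Example shapes: HybridCNN()(torch.rand(8, 14, 15, 15), torch.rand(8, 1, 14))
```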


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Xibin Wang ◽  
Junhao Wen ◽  
Shafiq Alam ◽  
Xiang Gao ◽  
Zhuo Jiang ◽  
...  

An accurate forecast of the sales growth rate plays a decisive role in determining the amount of advertising investment. In this study, we present a pre-classification and later-regression method, optimized by an improved particle swarm optimization (IPSO) algorithm, for sales growth rate forecasting. We use the support vector machine (SVM) as the classification model. The nonlinear relationships in sales growth rate forecasting are efficiently represented by the SVM, while IPSO optimizes the SVM training parameters. IPSO addresses issues of traditional PSO, such as relapsing into local optima, slow convergence speed and low convergence precision in the later stages of evolution. We performed two experiments: first, three classic benchmark functions were used to verify the validity of the IPSO algorithm against PSO. Having shown that IPSO outperforms PSO in convergence speed, precision and escaping local optima, in the second experiment we applied IPSO to the proposed model. Sales growth rate forecasting cases were used to test the forecasting performance of the proposed model: according to the requirements and industry knowledge, the sample data were first classified to obtain the types of the test samples, and the values of the test samples were then forecast using the SVM regression algorithm. The experimental results demonstrate that the proposed model has good forecasting performance.
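The sketch below shows plain PSO tuning the C and gamma parameters of an SVM regressor by cross-validation, which is the role IPSO plays in the proposed model; the inertia and acceleration constants, search ranges and the `pso_tune_svr` name are illustrative, and the paper's IPSO refinements are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def pso_tune_svr(X, y, n_particles=10, n_iter=20, seed=0):
    """Plain PSO over log10(C) and log10(gamma); the paper's IPSO adds further
    refinements (e.g. adaptive coefficients) that are not reproduced here."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-1, -4], [3, 1], size=(n_particles, 2))   # search box
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_val = pos[0].copy(), -np.inf

    def fitness(p):
        model = SVR(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    for _ in range(n_iter):
        for k in range(n_particles):
            f = fitness(pos[k])
            if f > pbest_val[k]:
                pbest[k], pbest_val[k] = pos[k].copy(), f
            if f > gbest_val:
                gbest, gbest_val = pos[k].copy(), f
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-1, -4], [3, 1])
    return 10 ** gbest[0], 10 ** gbest[1]            # best (C, gamma)
```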


Author(s):  
Noha Ali ◽  
Ahmed H. AbuEl-Atta ◽  
Hala H. Zayed

<span id="docs-internal-guid-cb130a3a-7fff-3e11-ae3d-ad2310e265f8"><span>Deep learning (DL) algorithms achieved state-of-the-art performance in computer vision, speech recognition, and natural language processing (NLP). In this paper, we enhance the convolutional neural network (CNN) algorithm to classify cancer articles according to cancer hallmarks. The model implements a recent word embedding technique in the embedding layer. This technique uses the concept of distributed phrase representation and multi-word phrases embedding. The proposed model enhances the performance of the existing model used for biomedical text classification. The result of the proposed model overcomes the previous model by achieving an F-score equal to 83.87% using an unsupervised technique that trained on PubMed abstracts called PMC vectors (PMCVec) embedding. Also, we made another experiment on the same dataset using the recurrent neural network (RNN) algorithm with two different word embeddings Google news and PMCVec which achieving F-score equal to 74.9% and 76.26%, respectively.</span></span>


Author(s):  
Rizwan Aqeel ◽  
Saif Ur Rehman ◽  
Saira Gillani ◽  
Sohail Asghar

This chapter focuses on the Autonomous Ground Vehicle (AGV), also known as the intelligent vehicle, which is a vehicle that can navigate without human supervision. AGV navigation over an unstructured road is a challenging task and a known research problem. This chapter detects the road area in an unstructured environment by applying a proposed classification model. The proposed model is subdivided into three stages: (1) preprocessing is performed in the initial stage; (2) road area clustering is carried out in the second stage; (3) finally, road pixel classification is achieved. A combination of classification and clustering is used to achieve these goals: the K-means clustering algorithm discovers the biggest cluster in the road scene, and the second-biggest cluster area is then classified as road or non-road using the well-known support vector machine. The proposed approach is validated by extensive experiments carried out on an RGB dataset, which show successful detection of the road area and robustness against diverse road conditions such as unstructured scenes, different weather and lighting variations.
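The clustering-plus-classification idea can be sketched as follows with scikit-learn; the number of clusters, the mask handling and the `largest_clusters` helper are illustrative assumptions rather than the chapter's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def largest_clusters(rgb_image, k=5):
    """Cluster pixel colours and return masks of the two largest clusters."""
    h, w, _ = rgb_image.shape
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        rgb_image.reshape(-1, 3).astype(float))
    counts = np.bincount(labels, minlength=k)
    first, second = np.argsort(counts)[::-1][:2]
    return (labels == first).reshape(h, w), (labels == second).reshape(h, w)

# The biggest cluster is taken as the road candidate; pixels of the second
# cluster are then accepted or rejected by an SVM trained on labelled samples.
# road_mask, candidate_mask = largest_clusters(frame)
# svm = SVC(kernel="rbf").fit(train_pixels, train_labels)   # road vs. non-road
# decisions = svm.predict(frame[candidate_mask])
```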


Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 97 ◽  
Author(s):  
Siddharth Chaudhary ◽  
Sarawut Ninsawat ◽  
Tai Nakamura

The aim of this study was to investigate the potential of a non-destructive hyperspectral imaging system (HSI) and the accuracy of a model developed using the Support Vector Machine (SVM) for the trace detection of explosives. Raman spectroscopy has been used in similar studies, but no study has been published that bases trace detection of explosives on reflectance measured by a hyperspectral sensor. The HSI used in this study has an advantage over existing techniques in that it combines an imaging system with spectroscopy and is contactless and non-destructive in nature. Hyperspectral images of the chemicals were collected using the BaySpec hyperspectral sensor, which operates in the spectral range of 400–1000 nm (144 bands). Image processing was applied to the acquired hyperspectral images to select the region of interest (ROI) and to extract the spectral reflectance of the chemicals, which was stored as a spectral library. Principal Component Analysis (PCA) and the first derivative were applied to reduce the high dimensionality of the image and to determine the optimal wavelengths between 400 and 1000 nm. In total, 22 of the 144 wavelengths were selected by analysing the loadings of the principal components (PC). SVM was used to develop the classification model. The SVM model established on the whole spectrum from 400 to 1000 nm achieved an accuracy of 81.11%, whereas an accuracy of 77.17% with a lower computational load was achieved when the SVM model was established on the selected optimal wavelengths. The results of the study demonstrate that the hyperspectral imaging system combined with SVM is a promising tool for the trace detection of explosives.
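The band-selection step can be sketched as ranking wavelengths by the magnitude of their principal-component loadings, as below; the aggregation rule and the `select_bands_by_loadings` helper are assumptions, since the exact thresholding used in the study is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def select_bands_by_loadings(spectra, n_components=5, n_bands=22):
    """Rank wavelengths by the magnitude of their PCA loadings; keep the top ones."""
    pca = PCA(n_components=n_components).fit(spectra)   # spectra: (samples, 144 bands)
    scores = np.abs(pca.components_).sum(axis=0)        # aggregate loading per band
    return np.argsort(scores)[::-1][:n_bands]           # indices of the selected bands

# selected = select_bands_by_loadings(roi_spectra)
# clf = SVC(kernel="rbf").fit(roi_spectra[:, selected], labels)
```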


Author(s):  
M. C. Girish Baabu ◽  
Padma M. C.

Hyperspectral imaging (HSI) comprises several hundred narrow bands (NB) with high spectral correlation and is widely used in crop classification; this induces time and space complexity, resulting in high computational overhead and the Hughes phenomenon when processing these images. Dimensionality reduction techniques such as band selection and feature extraction play an important part in enhancing the performance of hyperspectral image classification. However, existing methods are not efficient in noisy and mixed-pixel environments with dynamic illumination and climatic conditions. The proposed Semantic Feature Representation based HSI (SFR-HSI) crop classification method first employs an Image Fusion (IF) method to find meaningful features in the raw HSI spectrally, and second extracts inherent features that keep a spatially meaningful representation of different crops by eliminating shading elements. The meaningful feature set is then used for training with a Support Vector Machine (SVM). Experimental outcomes show that the proposed HSI crop classification model achieves much better accuracy and Kappa coefficient performance.
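Assuming the fused semantic features have already been produced, the final training and evaluation step reported above (overall accuracy and Kappa coefficient with an SVM) can be sketched as follows; the kernel, hyperparameters and `evaluate_crop_svm` name are illustrative.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.svm import SVC

def evaluate_crop_svm(X_train, y_train, X_test, y_test):
    """Fit an RBF-SVM on the fused crop features and report OA and Kappa."""
    clf = SVC(kernel="rbf", C=100, gamma="scale").fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return accuracy_score(y_test, y_pred), cohen_kappa_score(y_test, y_pred)
```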


Due to highly variable face geometry and appearance, Facial Expression Recognition (FER) is still a challenging problem. CNNs can characterize 2-D signals; therefore, for emotion recognition in video, the authors propose a feature selection model within the AlexNet architecture to extract and filter facial features automatically. Similarly, for emotion recognition in audio, the authors use a deep LSTM-RNN. Finally, they propose a probabilistic model for the fusion of the audio and visual models using the facial features and speech of a subject. The model combines all the extracted features and uses them to train linear SVM (Support Vector Machine) classifiers. The proposed model outperforms existing models and achieves state-of-the-art performance for the audio, visual and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
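A minimal sketch of the fusion step, concatenating per-clip visual and audio features and training a linear SVM over the seven expression classes, is given below; the feature extractors (AlexNet facial features, LSTM-RNN speech features) are assumed to be given, and the helper name is illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fuse_and_classify(visual_feats, audio_feats, labels):
    """Concatenate per-clip visual and audio features and train a linear SVM
    over the seven expression classes (fusion at the feature level)."""
    fused = np.hstack([visual_feats, audio_feats])   # (clips, d_visual + d_audio)
    return LinearSVC(C=1.0).fit(fused, labels)
```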

