VESSEL CLASSIFICATION IN COSMO-SKYMED SAR DATA USING HIERARCHICAL FEATURE SELECTION

Author(s):  
A. Makedonas ◽  
C. Theoharatos ◽  
V. Tsagaris ◽  
V. Anastasopoulos ◽  
S. Costicoglou

SAR-based ship detection and classification are important elements of maritime monitoring applications. Recently, high-resolution SAR data have opened new possibilities for achieving improved classification results. In this work, a hierarchical vessel classification procedure is presented, based on a robust feature extraction and selection scheme that exploits scale, shape and texture features in a hierarchical way. Initially, different types of feature extraction algorithms are implemented to form the feature pool, able to represent the structure, material, orientation and other vessel-type characteristics. A two-stage hierarchical feature selection algorithm is then applied to effectively discriminate civilian vessels in COSMO-SkyMed SAR images into three distinct types: cargo ships, small ships and tankers. In our analysis, scale and shape features are used to separate the smaller vessel types present in the available SAR data, as well as vessels with distinctive shapes. The most informative texture and intensity features are then incorporated in order to distinguish the remaining civilian types with high accuracy. A feature selection procedure is carried out that uses heuristic measures based on the features' statistical characteristics, followed by an exhaustive search over feature sets formed by the most qualified features, in order to determine the most appropriate combination of features for the final classification. In our analysis, five COSMO-SkyMed SAR images with 2.2 m x 2.2 m resolution were used to analyse the detailed characteristics of these ship types. A total of 111 ships with available AIS data were used in the classification process. The experimental results show that the method performs well in ship classification, with an overall accuracy reaching 83%. Further investigation of additional features and of the feature selection step is currently in progress.
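The abstract does not specify which statistical heuristic, classifier or subset size the two-stage selection uses. The sketch below illustrates the general pattern only, assuming a Fisher-score ranking in the first stage and a cross-validated k-NN wrapper for the exhaustive search in the second; all of these choices are stand-ins rather than the paper's configuration.

# Minimal sketch of a two-stage feature selection scheme (illustrative only):
# stage 1 ranks features with a simple statistical heuristic (Fisher score),
# stage 2 exhaustively searches small subsets of the top-ranked features.
# Heuristic, classifier and subset size are assumptions, not the paper's choices.
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def fisher_score(X, y):
    """Between-class over within-class variance ratio, per feature."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)


def two_stage_selection(X, y, n_candidates=10, subset_size=4):
    # Stage 1: keep the top-ranked candidate features.
    candidates = np.argsort(fisher_score(X, y))[::-1][:n_candidates]
    # Stage 2: exhaustive search over all small subsets of the candidates,
    # scored by cross-validated accuracy of a simple wrapper classifier.
    best_subset, best_acc = None, -np.inf
    for subset in combinations(candidates, subset_size):
        cols = list(subset)
        acc = cross_val_score(KNeighborsClassifier(), X[:, cols], y, cv=5).mean()
        if acc > best_acc:
            best_subset, best_acc = cols, acc
    return best_subset, best_acc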

Author(s):  
Daniel Reska ◽  
Marek Kretowski

In this paper, we present a fast multi-stage image segmentation method that incorporates texture analysis into a level set-based active contour framework. The approach allows multiple feature extraction methods to be integrated and is not tied to any specific texture descriptors; prior knowledge of the image patterns is also not required. The method starts with initial feature extraction and selection, then performs a fast level set-based evolution process, and ends with a final refinement stage that integrates a region-based model. The presented implementation employs a set of features based on Grey Level Co-occurrence Matrices, Gabor filters and structure tensors. The high performance of the feature extraction and contour evolution stages is achieved with GPU acceleration. The method is validated on synthetic and natural images and compared with the results of the most similar algorithms available.
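The abstract names the texture descriptors (GLCM, Gabor filters, structure tensors) but not their parameters or the GPU kernels. As a rough, CPU-only illustration of one such descriptor family, the sketch below builds a small Gabor filter bank in NumPy and turns the filter responses into per-pixel texture feature maps with SciPy; the kernel sizes, frequencies and the local-energy definition are assumptions, not the authors' settings.

# Rough CPU sketch of Gabor-filter texture features (the paper uses GPU-accelerated
# GLCM, Gabor and structure-tensor features; all parameters here are illustrative).
import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(frequency, theta, sigma=3.0, size=21):
    """Real part of a 2D Gabor kernel with the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * frequency * x_t)


def gabor_feature_maps(image, frequencies=(0.1, 0.2), n_orientations=4):
    """Stack of smoothed response magnitudes, one channel per (frequency, theta)."""
    maps = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            response = fftconvolve(image, gabor_kernel(f, theta), mode="same")
            # Local energy of the response serves as a per-pixel texture feature.
            maps.append(fftconvolve(np.abs(response), np.ones((9, 9)) / 81.0, mode="same"))
    return np.stack(maps, axis=-1)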


2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
Peng-yuan Liu ◽  
Bing Li ◽  
Cui-e Han ◽  
Feng Wang

A novel feature extraction and selection scheme is presented for intelligent engine fault diagnosis, utilizing two-dimensional nonnegative matrix factorization (2DNMF), mutual information, and the nondominated sorting genetic algorithm II (NSGA-II). Experiments are conducted on an engine test rig, in which eight different engine operating conditions, including one normal condition and seven fault conditions, are simulated in order to evaluate the presented feature extraction and selection scheme. In the feature extraction phase, the S-transform is first used to convert the engine vibration signals to the time-frequency domain, which provides richer information on the engine operating conditions. A novel feature extraction technique, two-dimensional nonnegative matrix factorization, is then employed to characterize the time-frequency representations. In the feature selection phase, a hybrid filter and wrapper scheme based on mutual information and NSGA-II is used to acquire a compact feature subset for engine fault diagnosis. Experimental results obtained with three different classifiers demonstrate that the proposed feature extraction and selection scheme achieves very satisfactory classification performance with fewer features for engine fault diagnosis.
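The scheme combines a mutual-information filter with an NSGA-II wrapper; the snippet below sketches only the filter half, ranking candidate features by their mutual information with the fault label via scikit-learn. The S-transform/2DNMF feature extraction, the NSGA-II search and the number of retained features are not reproduced here, and the placeholder data is random.

# Sketch of the mutual-information filter stage only (the NSGA-II wrapper and the
# S-transform/2DNMF feature extraction described in the abstract are not shown).
import numpy as np
from sklearn.feature_selection import mutual_info_classif


def rank_features_by_mi(X, y, keep=20):
    """Return indices of the `keep` features sharing the most mutual information with y."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:keep]


# Example usage with random stand-in data (8 simulated operating conditions):
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 64))      # 160 vibration records x 64 candidate features
y = rng.integers(0, 8, size=160)    # condition labels: 1 normal + 7 faults
selected = rank_features_by_mi(X, y)
X_reduced = X[:, selected]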


2019 ◽  
Vol 5 ◽  
pp. e237 ◽  
Author(s):  
Davide Nardone ◽  
Angelo Ciaramella ◽  
Antonino Staiano

In this work, we propose a novel feature selection framework, called Sparse-Modeling Based Approach for Class Specific Feature Selection (SMBA-CSFS), that simultaneously exploits the ideas of sparse modeling and class-specific feature selection. Feature selection plays a key role in several fields (e.g., computational biology), making it possible to work with models that have fewer variables, which, in turn, are easier to explain, provide valuable insights into the role each variable plays, and are likely to speed up experimental validation. Unfortunately, as the no-free-lunch theorems also suggest, no single approach in the literature is best at detecting the optimal feature subset for building a final model, so the task remains a challenge. The proposed feature selection procedure follows a two-step approach: (a) a sparse modeling-based learning technique is first used to find the best subset of features for each class of a training set; (b) the discovered feature subsets are then fed to a class-specific feature selection scheme, in order to assess the effectiveness of the selected features in classification tasks. To this end, an ensemble of classifiers is built, where each classifier is trained on its own feature subset discovered in the previous phase, and a proper decision rule is adopted to compute the ensemble responses. To evaluate the performance of the proposed method, extensive experiments were performed on publicly available datasets, in particular from computational biology, where feature selection is indispensable: acute lymphoblastic leukemia and acute myeloid leukemia, human carcinomas, human lung carcinomas, diffuse large B-cell lymphoma, and malignant glioma. SMBA-CSFS is able to identify the most representative features that maximize the classification accuracy. With the top 20 and 80 features, SMBA-CSFS exhibits promising performance compared to its competitors from the literature on all considered datasets, especially those with a larger number of features. The experiments show that the proposed approach may outperform state-of-the-art methods when the number of features is high, so the introduced approach lends itself to the selection and classification of data with a large number of features and classes.
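The abstract describes the per-class subsets and the classifier ensemble only at a high level. The following is a minimal sketch of that general pattern rather than SMBA-CSFS itself: each class gets a one-vs-rest mutual-information ranking, its own logistic-regression classifier trained on that subset, and the ensemble decision picks the class with the highest one-vs-rest probability. The ranking criterion, classifier and decision rule are all stand-ins.

# Minimal sketch of a class-specific feature selection ensemble (not SMBA-CSFS itself):
# each class gets its own feature subset and its own one-vs-rest classifier, and the
# ensemble predicts the class whose classifier is most confident.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression


class ClassSpecificEnsemble:
    def __init__(self, n_features_per_class=20):
        self.k = n_features_per_class

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.subsets_, self.models_ = {}, {}
        for c in self.classes_:
            y_ovr = (y == c).astype(int)
            # Class-specific subset: features most informative for "class c vs. rest".
            mi = mutual_info_classif(X, y_ovr, random_state=0)
            subset = np.argsort(mi)[::-1][: self.k]
            self.subsets_[c] = subset
            self.models_[c] = LogisticRegression(max_iter=1000).fit(X[:, subset], y_ovr)
        return self

    def predict(self, X):
        # Decision rule: the class whose one-vs-rest model gives the highest probability.
        scores = np.column_stack(
            [self.models_[c].predict_proba(X[:, self.subsets_[c]])[:, 1] for c in self.classes_]
        )
        return self.classes_[np.argmax(scores, axis=1)]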


Author(s):  
Marwa Ben Salah ◽  
Ameni Yengui ◽  
Mahmoud Neji

In this paper, we present two steps of the automatic annotation process for archaeological images: feature extraction and feature selection. We focus our research on archaeological images, which are widely studied nowadays, and on the most important steps of the automatic image annotation process. Feature extraction techniques are applied to obtain the features that will be used to classify and recognize the images, while feature selection reduces the number of uninformative features. We review various feature extraction techniques for analysing archaeological images; each feature corresponds to one or more feature descriptors of the archaeological images. We focus on shape descriptors of the archaeological objects, extracted from the images using contour-based shape recognition of the monuments. The feature selection stage then serves to retain the most relevant features in order to improve the accuracy of the classification. In the feature selection section, we present a comparative study of feature selection techniques, and then propose how to apply these selection methods to the features of archaeological images. Finally, we evaluate the performance of the two steps already mentioned: feature extraction and feature selection.
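The abstract does not detail the contour method used for the monuments. Purely as an illustration of contour-based shape description, the sketch below thresholds a grayscale image, extracts the largest contour with OpenCV (version 4 return signature assumed) and summarizes it with Hu moments; the thresholding and descriptor choices are generic, not the paper's.

# Illustrative contour-based shape descriptor (a generic example, not the paper's
# exact method): threshold, take the largest contour, describe it with Hu moments.
import cv2
import numpy as np


def contour_shape_descriptor(gray_image):
    """Return a 7-element Hu-moment descriptor of the largest contour (uint8 input)."""
    # Otsu threshold to get a binary silhouette of the object/monument.
    _, binary = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(7)
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    # Log-scale the moments so they are comparable across object sizes.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)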


2011 ◽  
Vol 38 (8) ◽  
pp. 10000-10009 ◽  
Author(s):  
Bing Li ◽  
Pei-lin Zhang ◽  
Hao Tian ◽  
Shuang-shan Mi ◽  
Dong-sheng Liu ◽  
...  

2018 ◽  
Vol 2018 ◽  
pp. 1-20
Author(s):  
Xiangmin Lun ◽  
Mingxuan Wang ◽  
Zhenglin Yu ◽  
Yimin Hou

To discover the influence of commercial videos' low-level features on their popularity, feature selection is needed to identify, after analysing the source data and the audiences' evaluations of the videos, the features that most influence how the videos are rated. After extracting the videos' low-level features, this paper improves the widely used Correlation-Based Feature Selection (CFS) method and proposes an algorithm named CFS-Spearman, which combines the Spearman correlation coefficient with classical CFS to select features. Four datasets from the UCI machine learning repository were employed as experimental data, and the results were compared with those obtained using traditional CFS and Minimum Redundancy Maximum Relevance (mRMR), with an SVM used to evaluate the selected features. Finally, the proposed method was applied to feature selection for the commercial videos, and the most influential feature set was obtained.
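Classical CFS scores a candidate subset of k features with the merit k*r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean feature-class correlation and r_ff the mean feature-feature correlation. The sketch below computes that merit with Spearman correlations via SciPy and drives it with a simple greedy forward search; the search strategy and stopping rule are assumptions, not necessarily the paper's CFS-Spearman variant.

# Sketch of a CFS-style merit computed with Spearman correlations (the greedy search
# and stopping rule are assumptions; the merit is Hall's classical CFS formula).
import numpy as np
from scipy.stats import spearmanr


def cfs_merit(X, y, subset):
    """CFS merit of a feature subset: k*r_cf / sqrt(k + k*(k-1)*r_ff), Spearman-based."""
    k = len(subset)
    r_cf = np.mean([abs(spearmanr(X[:, j], y)[0]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(spearmanr(X[:, i], X[:, j])[0])
                    for a, i in enumerate(subset) for j in subset[a + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)


def greedy_cfs(X, y, max_features=10):
    """Greedy forward selection that adds the feature giving the largest merit gain."""
    remaining, selected, best = list(range(X.shape[1])), [], 0.0
    while remaining and len(selected) < max_features:
        merit, j = max((cfs_merit(X, y, selected + [j]), j) for j in remaining)
        if merit <= best:
            break
        selected.append(j)
        remaining.remove(j)
        best = merit
    return selected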


2019 ◽  
Author(s):  
Davide Nardone ◽  
Angelo Ciaramella ◽  
Antonino Staiano

In this work, we propose a novel feature selection framework, called Sparse-Modeling Based Approach for Class Specific Feature Selection (SMBA-CSFS), that simultaneously exploits the ideas of sparse modeling and class-specific feature selection. Feature selection plays a key role in several fields (e.g., computational biology), making it possible to work with models that have fewer variables, which, in turn, are easier to explain, provide valuable insights into the role each variable plays, and might speed up experimental validation. Unfortunately, as the no-free-lunch theorems also suggest, no single approach in the literature is best at detecting the optimal feature subset for building a final model, so the task remains a challenge. The proposed feature selection procedure follows a two-step approach: (a) a sparse modeling-based learning technique is first used to find the best subset of features for each class of a training set; (b) the discovered feature subsets are then fed to a class-specific feature selection scheme, in order to assess the effectiveness of the selected features in classification tasks. To this end, an ensemble of classifiers is built, where each classifier is trained on its own feature subset discovered in the previous phase, and a proper decision rule is adopted to compute the ensemble responses. To evaluate the performance of the proposed method, extensive experiments were performed on publicly available datasets, in particular from computational biology, where feature selection is indispensable: acute lymphoblastic leukemia and acute myeloid leukemia, human carcinomas, human lung carcinomas, diffuse large B-cell lymphoma, and malignant glioma. SMBA-CSFS is able to identify the most representative features that maximize the classification accuracy. With the top 20 and 80 features, SMBA-CSFS exhibits promising performance compared to its competitors from the literature on all considered datasets, especially those with a larger number of features. The experiments show that the proposed approach might outperform state-of-the-art methods when the number of features is high, so the introduced approach lends itself to the selection and classification of data with a large number of features and classes.


2017 ◽  
Vol 84 (3) ◽  
Author(s):  
Tizian Schneider ◽  
Nikolai Helwig ◽  
Andreas Schütze

The classification of cyclically recorded time series plays an important role in measurement technology. Example use cases range from gas sensors combined with temperature-cycled operation to condition monitoring using vibration analysis. Before machine learning can be applied to high-dimensional cyclical time series data, dimensionality reduction has to be performed so that the classifier does not suffer from overfitting and the "curse of dimensionality". This paper introduces a set of four complementary feature extraction methods and three feature selection algorithms that can be applied in a fully automated manner to reduce the number of dimensions. The feature extraction algorithms extract characteristic features from cyclical time series, capturing information contained in local details and in the overall cycle shape, as well as in the frequency or time-frequency domain. The feature selection methods are capable of selecting the most suitable features for linear and nonlinear classification. The methods were chosen to be applicable to a wide range of applications, which is verified by testing the set of methods on four different use cases.
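The four extraction methods and three selectors are not detailed in the abstract. As a loose illustration of extracting complementary features from a single recorded cycle, the sketch below combines per-segment statistics (local detail and overall shape) with the dominant Fourier magnitudes (frequency domain); the segment count, statistics and number of coefficients are assumptions.

# Loose illustration of complementary features from one recorded cycle: local segment
# statistics (shape/local detail) plus dominant Fourier magnitudes (frequency domain).
# The concrete feature definitions are assumptions, not the paper's four methods.
import numpy as np


def cycle_features(cycle, n_segments=10, n_fourier=5):
    cycle = np.asarray(cycle, dtype=float)
    # Shape / local-detail features: mean and standard deviation per segment.
    segments = np.array_split(cycle, n_segments)
    stats = np.concatenate([[s.mean(), s.std()] for s in segments])
    # Frequency-domain features: magnitudes of the strongest non-DC Fourier components.
    spectrum = np.abs(np.fft.rfft(cycle))[1:]
    dominant = np.sort(spectrum)[::-1][:n_fourier]
    return np.concatenate([stats, dominant])


# Example: a noisy sine cycle yields a fixed-length feature vector.
t = np.linspace(0, 1, 500, endpoint=False)
features = cycle_features(np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(500))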


