Spectroscopic classification of a color image by subspace method

Author(s):  
S. Toyooka
2011 · Vol 23 (2) · pp. 121
Author(s):  
Ezzeddine Zagrouba ◽  
Walid Barhoumi

In this work, we are motivated by the desire to classify skin lesions as malignant or benign from color photographic slides of the lesions. We therefore combine color images of skin lesions, image processing techniques, and an artificial neural network classifier to distinguish melanoma from benign pigmented lesions. As a first step, a preprocessing sequence removes noise and undesired structures from the color image. Second, an automated segmentation approach localizes suspicious lesion regions by region growing, after a preliminary step based on fuzzy sets. Quantitative image analysis then measures a series of candidate attributes expected to carry enough information to differentiate melanomas from benign lesions. Finally, the selected features are supplied to an artificial neural network that classifies the lesion as malignant or benign. On a preliminary balanced training/testing set of real skin lesion images, the approach correctly classifies 79.1% of malignant and benign lesions.
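The region-growing segmentation step described above can be sketched with a basic intensity-based routine. This is a minimal illustration, not the authors' implementation: the fuzzy preliminary step is omitted, and the synthetic test image, seed point, and tolerance `tol` are assumptions introduced for the example.

```python
import numpy as np

def region_grow(img, seed, tol=0.15):
    """Grow a region from `seed`, accepting 4-neighbours whose
    intensity lies within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    stack = [seed]
    total, count = float(img[seed]), 1
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    stack.append((ny, nx))
    return mask

# Synthetic "lesion": a dark disc on a bright background (an assumption
# standing in for a preprocessed grayscale channel of the slide).
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 0.2, 0.9)

mask = region_grow(img, seed=(32, 32))

# Two example candidate attributes measured on the segmented region.
area = int(mask.sum())
mean_intensity = float(img[mask].mean())
```

In a full pipeline, attributes like `area` and `mean_intensity` (plus shape and color descriptors) would form the feature vector handed to the neural-network classifier.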


Author(s):  
Jose Alvaro Luna-Gonzalez ◽  
Daniel Robles-Camarillo ◽  
Mariko Nakano-Miyatake ◽  
Humberto Lanz-Mendoza ◽  
Hector Perez-Meana

In this paper, mosquito species are classified from wingbeat samples recorded by an optical sensor. Six representative species with worldwide distribution are considered: Aedes aegypti, Aedes albopictus, Anopheles arabiensis, Anopheles gambiae, Culex pipiens, and Culex quinquefasciatus. A total of 60,000 samples are divided equally among these species. Twenty-five audio feature extraction algorithms are applied to extract 39 feature values per sample. Each sample's audio features are then transformed into a color image in which the feature values are represented by different pixel values. A fully connected neural network is trained directly on the audio features, and a convolutional neural network (CNN) is trained on the image dataset generated from them. The CNN-based classifier reaches 90.75% accuracy, outperforming the 87.18% obtained by the classifier that uses the audio features directly.
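The feature-to-image transformation can be illustrated as follows. The min-max scaling to 8-bit pixels and the row-tiling layout are assumptions made for this sketch; the paper's exact mapping is not reproduced, and the CNN itself is omitted.

```python
import numpy as np

def features_to_image(feats, rows=39):
    """Map a 1-D feature vector to an 8-bit grayscale image:
    min-max normalise to [0, 255], then tile the vector row-wise.
    Both choices are illustrative assumptions, not the paper's method."""
    f = np.asarray(feats, dtype=float)
    lo, hi = f.min(), f.max()
    scaled = np.zeros_like(f) if hi == lo else (f - lo) / (hi - lo)
    row = np.round(scaled * 255).astype(np.uint8)
    return np.tile(row, (rows, 1))   # shape: (rows, len(feats))

rng = np.random.default_rng(0)
feats = rng.normal(size=39)          # stand-in for the 39 extracted values
img = features_to_image(feats)
print(img.shape, img.dtype)          # (39, 39) uint8
```

Images built this way can be batched and fed to a standard image classifier, while the raw 39-value vectors go straight into a fully connected network for comparison.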


Author(s):  
Buho Hoshino ◽  
Hasi Bagan ◽  
Akihiro Nakazawa ◽  
Masami Kaneko ◽  
Masaki Kawai ◽  
...  

Author(s):  
Hicham Amakdouf ◽  
Amal Zouhri ◽  
Mostafa El Mallahi ◽  
Ahmed Tahiri ◽  
Driss Chenouni ◽  
...  
