Feature Extraction of Melanoma Data using Machine Learning Techniques

2021 ◽  
pp. 5352-5360
Author(s):  
R.Veeralakshmi, Dr.K.Merriliance

The skin is the body's largest organ; it protects against injury and infection and helps regulate body temperature. Melanoma is one of the most dangerous skin cancers and is caused by uncontrolled growth of abnormal skin cells, often triggered by ultraviolet radiation from sunlight. Melanoma is more common in people with lighter skin, such as white Americans, than in people with darker skin. The digital lesion images are analyzed through image acquisition, pre-processing, and image segmentation, with segmentation used to isolate the affected portion of the input skin image. The images are enhanced with morphological filters that sharpen the region of interest, correct non-uniform background illumination, and convert the input image into a binary image in which foreground objects are easy to identify. The melanoma mole is then segmented from the background using an Active Contour algorithm. Finally, feature extraction methods such as Kernel PCA and SIFT are used to describe the melanoma-affected area based on its intensity and texture features.
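
As an illustration only, the sketch below outlines such a pipeline with scikit-image and scikit-learn: morphological enhancement, a binary foreground image, an active-contour segmentation (a Chan-Vese variant is used here as a stand-in for the paper's contour model), and Kernel PCA on intensity features. The file name, filter sizes, and iteration counts are placeholders, and the SIFT descriptors mentioned in the abstract are omitted.

```python
import numpy as np
from skimage import io, color, filters, morphology
from skimage.segmentation import morphological_chan_vese
from sklearn.decomposition import KernelPCA

# Load a dermoscopic image (hypothetical file name) and convert to grayscale.
image = color.rgb2gray(io.imread("lesion.jpg"))

# Correct non-uniform background illumination with a morphological opening,
# then threshold to a binary image so the foreground mole stands out.
background = morphology.opening(image, morphology.disk(15))
enhanced = image - background
binary = enhanced > filters.threshold_otsu(enhanced)

# Segment the mole from the background with a region-based active contour.
mask = morphological_chan_vese(image, 100, init_level_set=binary)

# Project intensity features of the segmented region with Kernel PCA.
lesion_pixels = image[mask.astype(bool)].reshape(-1, 1)
kpca = KernelPCA(n_components=1, kernel="rbf")
intensity_features = kpca.fit_transform(lesion_pixels[:500])  # subsample for speed
print(intensity_features.shape)
```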

2020 ◽  
pp. 17-23
Author(s):  
Neeraj Kumari ◽  
Ashutosh Kumar Bhatt ◽  
Rakesh Kumar Dwivedi ◽  
Rajendra Belwal

Image segmentation is an essential and critical step in a large number of image processing applications, and its accuracy influences the information retrieved for further processing such as classification. In many cases a single segmentation technique is not sufficient to provide accurate results. In this paper we propose a combined approach to image segmentation for improving segmentation accuracy. As a case study, mango fruit is classified based on surface defects. The classification method consists of three steps: (a) image pre-processing, (b) feature extraction and feature selection, and (c) classification of the mango. Feature extraction is performed on an enhanced input image, and PCA is used for feature selection. Three classifiers are used: BPNN, Naive Bayes, and LDA. The proposed segmentation technique is tested on an online dataset and on our own collected image database, and its performance is compared with existing segmentation techniques. The classification results of the BPNN in the training and testing phases are acceptable for the proposed segmentation technique.
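
A minimal sketch of the feature-selection and classification stages, assuming a feature matrix has already been extracted from the segmented mango images (the random data below is a placeholder, and the back-propagation network is approximated by scikit-learn's MLPClassifier):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Placeholder feature matrix: one row of surface-defect features per image.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)          # 0 = healthy surface, 1 = defective

# Feature selection/reduction with PCA, as in the pipeline described above.
X_reduced = PCA(n_components=10).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.3, random_state=0)

# The three classifiers compared in the study.
for name, clf in [("BPNN", MLPClassifier(max_iter=1000, random_state=0)),
                  ("Naive Bayes", GaussianNB()),
                  ("LDA", LinearDiscriminantAnalysis())]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```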


Author(s):  
Daniel Reska ◽  
Marek Kretowski

Abstract In this paper, we present a fast multi-stage image segmentation method that incorporates texture analysis into a level set-based active contour framework. This approach allows multiple feature extraction methods to be integrated and is not tied to any specific texture descriptors; prior knowledge of the image patterns is also not required. The method starts with an initial feature extraction and selection, then performs a fast level set-based evolution process, and ends with a final refinement stage that integrates a region-based model. The presented implementation employs a set of features based on Grey Level Co-occurrence Matrices, Gabor filters and structure tensors. The high performance of the feature extraction and contour evolution stages is achieved with GPU acceleration. The method is validated on synthetic and natural images and compared with the most similar of the publicly available algorithms.
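
A rough, CPU-only sketch of the idea with scikit-image: one Gabor channel and a global GLCM statistic stand in for the full feature bank, and a morphological geodesic active contour stands in for the paper's level-set evolution. The test image, filter frequency, and iteration count are placeholders.

```python
import numpy as np
from skimage import data, filters
from skimage.feature import graycomatrix, graycoprops
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

# Example grayscale texture image.
image = data.camera().astype(float) / 255.0

# One texture channel from a Gabor filter bank (the paper combines GLCM,
# Gabor and structure-tensor features; only Gabor is shown here).
gabor_real, _ = filters.gabor(image, frequency=0.2)

# Global GLCM statistics as an example of a second descriptor family.
glcm = graycomatrix((image * 255).astype(np.uint8), distances=[1],
                    angles=[0], symmetric=True, normed=True)
print("contrast:", graycoprops(glcm, "contrast")[0, 0])

# Level-set style contour evolution driven by the texture channel.
gimage = inverse_gaussian_gradient(np.abs(gabor_real))
init = disk_level_set(image.shape, radius=80)
mask = morphological_geodesic_active_contour(gimage, 100, init_level_set=init)
```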


Author(s):  
DOMENEC PUIG ◽  
MIGUEL ANGEL GARCIA

This paper presents a pixel-based texture classifier for identifying which texture models, from a set known in advance, are present in an input image. The proposed methodology integrates texture features generated by texture methods belonging to different families, evaluated over multiple windows of different sizes. This is a novelty with respect to current texture classifiers, which are based on specific families of texture methods evaluated over single windows of an empirically defined size. Experiments show that this integration strategy produces better results than classical texture classifiers based on specific families of texture methods.
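
A toy sketch of the multi-window idea: per-pixel statistics are evaluated over several window sizes, stacked into one feature vector per pixel, and fed to a simple pixel-level classifier. The feature choices (local mean and variance), window sizes, and synthetic textures are illustrative only, not the feature families used in the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

def multi_window_features(image, window_sizes=(5, 9, 17)):
    """Per-pixel texture features evaluated over several window sizes."""
    feats = []
    for w in window_sizes:
        mean = ndimage.uniform_filter(image, size=w)
        sq_mean = ndimage.uniform_filter(image ** 2, size=w)
        feats += [mean, sq_mean - mean ** 2]      # mean and variance per window
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Two synthetic texture models and a pixel-level classifier.
rng = np.random.default_rng(1)
smooth = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 3)
noisy = rng.normal(size=(64, 64))
X = np.vstack([multi_window_features(smooth), multi_window_features(noisy)])
y = np.r_[np.zeros(64 * 64), np.ones(64 * 64)]
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("training accuracy:", clf.score(X, y))
```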


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Fanlin Shen ◽  
Siyi Cheng ◽  
Zhu Li ◽  
Keqiang Yue ◽  
Wenjun Li ◽  
...  

Obstructive sleep apnea-hypopnea syndrome (OSAHS) is extremely harmful to the human body and may cause neurological and endocrine dysfunction, resulting in damage to multiple organs and systems throughout the body and negatively affecting the cardiovascular, renal, and mental systems. Clinically, doctors usually rely on standard polysomnography (PSG) to assist diagnosis. PSG determines whether a person has apnea syndrome using multidimensional data such as brain waves, heart rate, and blood oxygen saturation. In this paper, we present a method for recognizing OSAHS that is convenient for patients to monitor themselves in daily life and thus avoid delayed treatment. First, we theoretically analyzed the differences between the snoring sounds of healthy people and OSAHS patients in the time and frequency domains. Second, snoring sounds related to apnea events and non-apnea-related snoring sounds were classified by deep learning, and the severity of OSAHS symptoms was then recognized. In the proposed algorithm, snoring features are extracted with three methods: MFCC, LPCC, and LPMFCC. CNN and LSTM models are adopted for classification. The experimental results show that the combination of MFCC features and the LSTM model achieves the highest accuracy, 87%, for binary classification of snoring data. Moreover, the algorithm can estimate the patient's AHI value, which determines the severity of OSAHS.
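
A minimal sketch of the MFCC + LSTM branch of such a system, assuming librosa and TensorFlow/Keras are available; the audio file name, sample rate, and network size are placeholders, and the LPCC/LPMFCC features and CNN model are not shown.

```python
import numpy as np
import librosa
from tensorflow.keras import layers, models

# Hypothetical snoring clip; only MFCC extraction is illustrated here.
audio, sr = librosa.load("snore_clip.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T   # (frames, 13)

# A small LSTM for binary classification of apnea-related vs. normal snores.
model = models.Sequential([
    layers.Input(shape=(None, 13)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use many labelled snore segments; a single clip is shown
# only to illustrate the expected input shape.
print(model.predict(mfcc[np.newaxis, ...]).shape)   # (1, 1)
```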


2020 ◽  
Vol 17 (8) ◽  
pp. 3453-3457
Author(s):  
Chinka Siva Gopi ◽  
Chidipudi Sivareddy ◽  
K. Mohana Prasad ◽  
R. Sabitha ◽  
K. Ashok Kumar

Cancer is a dangerous disease that can affect a particular area in depth and put other parts of the body at risk. Nowadays, more and more women are affected by breast cancer, so machine learning techniques have been proposed to analyze the affected area and use that information to forecast additional incidents. Machine learning is popular in several domains, one of them being healthcare evaluation. Image classification and feature extraction bring the image of the affected area into several analysis methods. In the proposed system, we suggest a CNN (Convolutional Neural Network) based design that fetches a sequence of images from a medical scanner repository; the images are preprocessed and then segmented for feature extraction. The effectiveness of the suggested design is examined and compared with other machine learning procedures, and the proposed system is found to supply better results. The model tends to be more precise because it uses an iterative method for feature extraction when classifying images. Some images are kept for training and testing, and the achieved accuracy is compared with the existing model.
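
A compact Keras sketch of a CNN of the kind suggested above; the input size, layer widths, and the random batch standing in for the scanner repository are all placeholders, not the authors' actual architecture or data.

```python
import numpy as np
from tensorflow.keras import layers, models

# Assumed: scans preprocessed to 64x64 grayscale patches (sizes are placeholders).
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy batch standing in for the medical image repository.
X = np.random.rand(8, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(X, y, epochs=1, verbose=0)
```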


2013 ◽  
Vol 09 (01) ◽  
pp. 1250005
Author(s):  
A. SINDHUJA ◽  
V. SADASIVAM

Breast cancer is a leading cause of cancer death in women, and early detection and treatment can significantly reduce breast cancer mortality. Texture features are widely used in classification problems, mainly for diagnostic purposes where the region of interest is delineated manually, but they have not yet been considered for sonoelastographic segmentation. This paper proposes a method for segmenting sonoelastographic breast images with an optimum number of features chosen from 32 features extracted by three different methods: the Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and edge-based features. The image is preprocessed with a sticks filter, which improves contrast, enhances edges, and emphasizes the tumor boundary. The features are extracted and then ranked using Sequential Forward Floating Selection (SFFS). The optimum number of ranked features is used for segmentation with k-means clustering, and the segmented images undergo morphological processing that marks the tumor boundary. The overall accuracy is studied to investigate the effect of automated segmentation: the subset of the first 10 ranked features provides an accuracy of 79%, and the combined metric of overlap, over-segmentation, and under-segmentation is 90%. The proposed work can also be used for diagnostic purposes, along with sonographic breast images.
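
A small sketch of two of the feature families and the clustering step, assuming scikit-image and scikit-learn; the example image is a stand-in for a sonoelastographic scan, and the full 32-feature set, SFFS ranking, and sticks filter are not reproduced.

```python
import numpy as np
from skimage import data, img_as_ubyte
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.cluster import KMeans

# Example grayscale image standing in for a sonoelastographic breast scan.
image = img_as_ubyte(data.camera())

# Two feature families: per-pixel LBP codes and a global GLCM statistic.
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
glcm = graycomatrix(image, distances=[1], angles=[0], symmetric=True, normed=True)
print("GLCM homogeneity:", graycoprops(glcm, "homogeneity")[0, 0])

# k-means clustering of per-pixel feature vectors (intensity + LBP) as a
# simple proxy for the feature-based segmentation step.
features = np.column_stack([image.ravel(), lbp.ravel()]).astype(float)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(image.shape)
```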


2021 ◽  
Vol 14 (2) ◽  
pp. 48-66
Author(s):  
Sneha Kugunavar ◽  
Prabhakar C. J.

This article presents a novel technique for retrieving lung images from a collection of medical CT images. The proposed content-based medical image retrieval (CBMIR) technique uses an automated segmentation technique, Delaunay triangulation (DT), to segment the lung (the region of interest) from the original medical image. The method extracts discriminant features from the segmented lung region instead of from the whole original image. For shape features, the authors employ the edge histogram descriptor (EHD) and geometric moments (GM); for texture features, they use the gray-level co-occurrence matrix (GLCM). The shape and texture features are combined into a hybrid feature used to retrieve similar lung images. The proposed method is evaluated on two benchmark datasets of lung CT images, and the simulation results show that the proposed CBMIR framework improves performance in terms of retrieval accuracy and retrieval time.
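
A rough sketch of a hybrid shape-plus-texture retrieval index, assuming OpenCV, scikit-image, and scikit-learn; Hu moments are used here as a simple proxy for the EHD + geometric-moment shape features, and the random images stand in for segmented lung regions.

```python
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import NearestNeighbors

def hybrid_feature(gray_lung_region):
    """Concatenate simple shape (Hu moments) and texture (GLCM) descriptors
    for a segmented lung region."""
    mask = (gray_lung_region > 0).astype(np.uint8)
    hu = cv2.HuMoments(cv2.moments(mask)).ravel()
    glcm = graycomatrix(gray_lung_region, [1], [0], symmetric=True, normed=True)
    tex = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([hu, tex])

# Index a dummy database of segmented lung regions, then retrieve the most
# similar images to a query by nearest-neighbour search on hybrid features.
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, size=(128, 128), dtype=np.uint8) for _ in range(20)]
index = NearestNeighbors(n_neighbors=5).fit([hybrid_feature(im) for im in database])
query = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
distances, neighbours = index.kneighbors([hybrid_feature(query)])
print("retrieved image indices:", neighbours[0])
```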


2021 ◽  
Author(s):  
Ying Bi ◽  
Mengjie Zhang ◽  
Bing Xue

Feature extraction is an essential process in image classification. Existing feature extraction methods can extract important and discriminative image features but often require domain expertise and human intervention. Genetic Programming (GP) can automatically extract features that adapt better to different image classification tasks. However, the majority of GP-based methods extract only relatively simple features of a single type, i.e., local or global, which is neither effective nor efficient for complex image classification. In this paper, a new GP method (GP-GLF) is proposed to extract global and local features automatically and simultaneously for image classification. To extract discriminative image features, several effective and well-known feature extraction methods, such as HOG, SIFT and LBP, are employed as GP functions in global and local scenarios. A novel program structure is developed to allow GP-GLF to evolve descriptors that synthesise feature vectors from the input image and from automatically detected regions using these functions. The performance of the proposed method is evaluated on four image classification data sets of varying difficulty and compared with seven GP-based methods and a set of non-GP methods. Experimental results show that the proposed method achieves significantly better or similar performance compared with almost all the peer methods. Further analysis of the evolved programs shows the good interpretability of the GP-GLF method.
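
For illustration, the sketch below shows the kind of global and local feature functions (HOG over the whole image, an LBP histogram over a region) that such a GP system could compose; the evolutionary search itself and the automatic region detection are not reproduced, and the region coordinates and parameters are placeholders.

```python
import numpy as np
from skimage import data, transform, img_as_ubyte
from skimage.feature import hog, local_binary_pattern

image = transform.resize(data.camera(), (128, 128), anti_aliasing=True)

def global_hog(img):
    """Global descriptor: HOG over the whole image."""
    return hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

def local_lbp_hist(img, top, left, size=32):
    """Local descriptor: LBP histogram of one region (coordinates given here,
    detected automatically in the actual method)."""
    patch = img_as_ubyte(img[top:top + size, left:left + size])
    codes = local_binary_pattern(patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
    return hist

# A candidate program would synthesise a feature vector such as this one and
# pass it to a classifier during fitness evaluation.
feature_vector = np.concatenate([global_hog(image), local_lbp_hist(image, 40, 40)])
print(feature_vector.shape)
```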


Recognition of human emotions is a fascinating research field that has motivated many researchers to use various approaches, such as facial expression, speech, or body gesture. The electroencephalogram (EEG) is another approach to recognizing human emotion, through brain signals, and has offered promising findings. Although EEG signals provide detailed information on human emotional states, analyzing their non-linear and chaotic characteristics is a substantial problem. The main challenge remains extracting relevant features from EEG signals in order to achieve optimum classification performance. Researchers have developed various feature extraction methods, which can mainly be categorized as time-, frequency-, or time-frequency-based. Yet numerous settings can affect the performance of any model. In this paper, we investigate the performance of the Discrete Wavelet Transform (DWT) and the Discrete Wavelet Packet Transform (DWPT), which are time-frequency domain methods, using Support Vector Machine (SVM) and k-Nearest Neighbor (KNN) classifiers. Different SVM kernel functions and KNN distance metrics are tested using subject-dependent and subject-independent approaches. The experiments are implemented on the publicly available DEAP dataset. The results show that DWT is mostly suitable with a weighted KNN classifier, while DWPT reports better results with a linear SVM classifier, for accurately classifying EEG signals in the subject-dependent approach. Consistent results are observed for DWT-KNN in the subject-independent approach; however, SVM works better with a quadratic kernel function. These results indicate that further investigation is needed to examine the impact of different method settings when analyzing large-scale EEG data.
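
A minimal sketch of the DWT branch of such a pipeline, assuming PyWavelets and scikit-learn; the synthetic signals stand in for DEAP EEG segments, and the wavelet, decomposition level, and band statistics are illustrative choices. The DWPT counterpart would use PyWavelets' wavelet packet transform instead.

```python
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Band-wise energy and standard deviation from a discrete wavelet
    decomposition of one EEG channel."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(c) for c in coeffs
                     for f in (lambda x: np.sum(x ** 2), np.std)])

# Synthetic stand-in for DEAP EEG segments (rows) with binary emotion labels.
rng = np.random.default_rng(0)
signals = rng.normal(size=(120, 1024))
labels = rng.integers(0, 2, size=120)
X = np.array([dwt_features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

# The two classifier settings highlighted in the abstract.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
svm = SVC(kernel="linear").fit(X_tr, y_tr)
print("weighted KNN:", knn.score(X_te, y_te), "linear SVM:", svm.score(X_te, y_te))
```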

