Classification of Mammograms Using Texture and CNN Based Extracted Features

Author(s):  
Taye Girma Debelee ◽  
Abrham Gebreselasie ◽  
Friedhelm Schwenker ◽  
Mohammadreza Amirian ◽  
Dereje Yohannes

In this paper, a modified adaptive K-means (MAKM) method is proposed to extract the region of interest (ROI) from local and public datasets. The local image dataset was collected from Bethezata General Hospital (BGH), and the public dataset is from the Mammographic Image Analysis Society (MIAS). The same number of images is used for both datasets: 112 abnormal and 208 normal. Two texture features (GLCM and Gabor) extracted from the ROIs and one set of CNN-based features are considered in the experiment. The CNN features are extracted using the Inception-V3 pre-trained model after simple preprocessing and cropping. The quality of the features is evaluated individually and after fusing features with one another, and five classifiers (SVM, KNN, MLP, RF, and NB) are used to measure the descriptive power of the features using cross-validation. The proposed approach was first evaluated on the local dataset and then applied to the public dataset. The results of the classifiers are measured using accuracy, sensitivity, specificity, kappa, computation time, and AUC. The experimental analysis of GLCM features from the two datasets indicates that GLCM features from the BGH dataset outperformed those from the MIAS dataset with all five classifiers. However, Gabor features from the two datasets scored the best results with two classifiers (SVM and MLP). For BGH and MIAS, SVM scored an accuracy of 99% and 97.46%, a sensitivity of 99.48% and 96.26%, and a specificity of 98.16% and 100%, respectively; MLP achieved an accuracy of 97% and 87.64%, a sensitivity of 97.40% and 96.65%, and a specificity of 96.26% and 75.73%, respectively. The relatively best performance is achieved by fusing Gabor and CNN-based features with the MLP classifier. However, the KNN, MLP, RF, and NB classifiers achieved almost 100% performance for GLCM texture features, and SVM scored an accuracy of 96.88%, a sensitivity of 97.14%, and a specificity of 96.36%. Compared with the other classifiers, NB recorded the shortest computation time in all experiments.
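
A minimal sketch of the GLCM-plus-classifier step described above (not the authors' exact pipeline): GLCM descriptors are computed from grayscale ROIs with scikit-image and scored with a cross-validated SVM from scikit-learn. The ROIs, labels, and parameter choices below are placeholders; the real ROIs would come from the MAKM segmentation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(roi, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Small GLCM descriptor: contrast, correlation, energy, homogeneity per (distance, angle)."""
    glcm = graycomatrix(roi, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder ROIs and labels (0 = normal, 1 = abnormal)
rois = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)

X = np.array([glcm_features(r) for r in rois])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5, scoring="accuracy")
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```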

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gaihua Wang ◽  
Qianyu Zhai

Abstract: Contextual information is a key factor affecting semantic segmentation. Recently, many methods have tried to use the self-attention mechanism to capture more contextual information. However, these self-attention methods require a huge amount of computation. To address this problem, a novel self-attention network, called FFANet, is designed to capture contextual information efficiently, reducing the amount of computation through strip pooling and linear layers. It introduces a feature fusion (FF) module to calculate an affinity matrix that captures the relationships between pixels. The affinity matrix is then multiplied with the feature map, which selectively increases the weight of the region of interest. Extensive experiments on public datasets (PASCAL VOC2012, Cityscapes) and a remote sensing dataset (DLRSD) achieved mean IoU scores of 74.5%, 70.3%, and 63.9%, respectively. Compared with current typical algorithms, the proposed method achieves excellent performance.
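
The strip-pooling idea can be illustrated with a rough PyTorch sketch: pooling each spatial axis shrinks the affinity matrix from (HW)x(HW) to (H+W)x(H+W), which is where the computational saving comes from. The block below is only an interpretation of that idea, not the published FF module; the layer sizes and the final re-weighting scheme are assumptions.

```python
import torch
import torch.nn as nn

class StripAffinityAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Linear(channels, channels // 8)
        self.key = nn.Linear(channels, channels // 8)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Strip pooling: average over one spatial axis at a time,
        # so the affinity matrix has only (H + W) rows and columns.
        h_strip = x.mean(dim=3).permute(0, 2, 1)             # (B, H, C)
        w_strip = x.mean(dim=2).permute(0, 2, 1)             # (B, W, C)
        strips = torch.cat([h_strip, w_strip], dim=1)        # (B, H+W, C)
        affinity = torch.softmax(
            self.query(strips) @ self.key(strips).transpose(1, 2), dim=-1)
        context = affinity @ strips                          # (B, H+W, C)
        # Broadcast the pooled context back over the feature map (assumed fusion).
        ctx_h = context[:, :h].permute(0, 2, 1).unsqueeze(3)  # (B, C, H, 1)
        ctx_w = context[:, h:].permute(0, 2, 1).unsqueeze(2)  # (B, C, 1, W)
        return x * torch.sigmoid(ctx_h + ctx_w)

feat = torch.randn(2, 64, 32, 48)
print(StripAffinityAttention(64)(feat).shape)                # torch.Size([2, 64, 32, 48])
```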


Author(s):  
Marina Milosevic ◽  
Dragan Jankovic ◽  
Aleksandar Peulic

Abstract: In this paper, we present a system based on feature extraction techniques for detecting abnormal patterns in digital mammograms and thermograms. A comparative study of texture-analysis methods is performed on three image groups: mammograms from the Mammographic Image Analysis Society database, digital mammograms from a local database, and thermography images of the breast. We also present a procedure for the automatic separation of the breast region from the mammograms. Features computed from gray-level co-occurrence matrices are used to evaluate the effectiveness of the textural information possessed by mass regions. A total of 20 texture features are extracted from the region of interest. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a support vector machine classifier, a Naive Bayes classifier, and a K-nearest neighbor classifier. To evaluate the classification performance, five-fold cross-validation and receiver operating characteristic analysis were performed.
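
A hedged sketch of the classifier-comparison step only (feature extraction and breast-region separation are omitted); X and y below are placeholders standing in for the 20-dimensional GLCM feature vectors and their labels, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(100, 20)          # placeholder 20-feature GLCM vectors
y = np.array([0] * 50 + [1] * 50)    # placeholder labels (0 = normal, 1 = abnormal)

# Five-fold cross-validation with accuracy and ROC-AUC for each classifier
for name, clf in [("SVM", SVC()),
                  ("Naive Bayes", GaussianNB()),
                  ("K-Nearest Neighbor", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: accuracy {acc.mean():.3f}, ROC AUC {auc.mean():.3f}")
```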


2014 ◽  
Vol 626 ◽  
pp. 65-71
Author(s):  
V. Amsaveni ◽  
N. Albert Singh ◽  
J. Dheeba

In this paper, a computer-aided classification approach using a cascade-correlation neural network for the detection of brain tumors from MRI is proposed. The cascade-correlation neural network is a nonlinear classifier formulated as a supervised learning problem, and it is applied to determine at each pixel location in the MRI whether a tumor is present or not. Gabor texture features are extracted from the image region of interest (ROI) and given as input to the proposed classifier. The method was applied to real-time images collected from diagnostic centers. The analysis shows that the performance of the proposed cascade-correlation neural network classifier is superior to other classification approaches.
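
A hedged sketch of the per-pixel Gabor feature extraction described above: each pixel receives a vector of Gabor magnitude responses and is then classified. Since cascade-correlation networks have no off-the-shelf implementation in common Python libraries, a scikit-learn MLP stands in for the classifier; the filter-bank parameters and the synthetic image and mask are placeholders.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neural_network import MLPClassifier

def gabor_pixel_features(image, frequencies=(0.1, 0.2, 0.4)):
    """Stack Gabor magnitude responses so every pixel gets one feature vector."""
    responses = []
    for f in frequencies:
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(image, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))
    return np.stack(responses, axis=-1).reshape(-1, len(responses))

mri = np.random.rand(64, 64)                                  # placeholder MRI slice
mask = np.zeros((64, 64), dtype=int); mask[20:40, 20:40] = 1  # placeholder tumor mask

X = gabor_pixel_features(mri)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X, mask.ravel())
prediction_map = clf.predict(X).reshape(mri.shape)            # per-pixel tumor / non-tumor map
```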


Author(s):  
Ida Nurhaida ◽  
Hong Wei ◽  
Remmy A. M. Zen ◽  
Ruli Manurung ◽  
Aniati M. Arymurthy

This paper systematically investigates the effect of image texture features on batik motif retrieval performance. The retrieval process uses a query motif image to find matching motif images in a database. In this study, feature fusion of various image texture features such as Gabor, Log-Gabor, Grey Level Co-Occurrence Matrices (GLCM), and Local Binary Pattern (LBP) features are attempted in motif image retrieval. With regards to performance evaluation, both individual features and fused feature sets are applied. Experimental results show that optimal feature fusion outperforms individual features in batik motif retrieval. Among the individual features tested, Log-Gabor features provide the best result. The proposed approach is best used in a scenario where a query image containing multiple basic motif objects is applied to a dataset in which retrieved images also contain multiple motif objects. The retrieval rate achieves 84.54% for the rank 3 precision when the feature space is fused with Gabor, GLCM and Log-Gabor features. The investigation also shows that the proposed method does not work well for a retrieval scenario where the query image contains multiple basic motif objects being applied to a dataset in which the retrieved images only contain one basic motif object.
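
An illustrative sketch of fused-texture retrieval (not the authors' exact feature set or settings): GLCM and LBP descriptors are concatenated per image and database images are ranked by Euclidean distance to the query. Log-Gabor is omitted here for brevity, and all images, bin counts, and distances are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def fused_descriptor(img):
    """Concatenate a small GLCM descriptor with a uniform-LBP histogram."""
    glcm = graycomatrix(img, [1], [0, np.pi / 2], levels=256, symmetric=True, normed=True)
    glcm_part = np.hstack([graycoprops(glcm, p).ravel()
                           for p in ("contrast", "energy", "homogeneity")])
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_part, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.hstack([glcm_part, lbp_part])

# Placeholder motif database and query image
database = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(50)]
query = database[0]

feats = np.array([fused_descriptor(im) for im in database])
q = fused_descriptor(query)
rank3 = np.argsort(np.linalg.norm(feats - q, axis=1))[:3]   # indices of the top-3 matches
print("Top-3 retrieved image indices:", rank3)
```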


2021 ◽  
Vol 11 ◽  
Author(s):  
Chen-Xi Liu ◽  
Li-Jun Heng ◽  
Yu Han ◽  
Sheng-Zhong Wang ◽  
Lin-Feng Yan ◽  
...  

Objective: To explore the usefulness of texture signatures based on multiparametric magnetic resonance imaging (MRI) in predicting the subtypes of growth hormone (GH) pituitary adenoma (PA). Methods: Forty-nine patients with GH-secreting PA confirmed by pathological analysis were included in this retrospective study. Texture parameters based on T1-, T2-, and contrast-enhanced T1-weighted images (T1C) were extracted and compared for differences between densely granulated (DG) and sparsely granulated (SG) somatotroph adenomas using two segmentation methods [region of interest 1 (ROI1), excluding the cystic/necrotic portion, and ROI2, containing the whole tumor]. Receiver operating characteristic (ROC) curve analysis was performed to determine the differentiating efficacy. Results: Among the 49 included patients, 24 had DG and 25 had SG adenomas. Nine optimal texture features with significant differences between the two groups were obtained from ROI1. Based on the ROC analyses, the T1WI signature from ROI1 achieved the highest diagnostic efficacy with an AUC of 0.918; the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 85.7, 72.0, 100.0, 100.0, and 77.4%, respectively, for differentiating DG from SG. Compared with the T1WI signature, the T1C signature obtained relatively high efficacy with an AUC of 0.893. When the texture features of T1WI and T1C were combined, the radiomics signature also performed well in differentiating the two groups, with an AUC of 0.908. In addition, the performance obtained with all signatures from ROI2 was lower than that of the corresponding signatures from ROI1. Conclusion: Texture signatures based on MR images may be useful biomarkers to differentiate subtypes of GH-secreting PA patients.
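
A minimal sketch of the ROC step reported above, assuming the texture features have already been extracted. Combining the features into a single signature score with logistic regression is an assumption made here for illustration (the paper's signature construction may differ); the feature values, labels, and the Youden-index cut-off are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

features = np.random.rand(49, 9)          # placeholder: 9 optimal texture features per patient
labels = np.array([1] * 24 + [0] * 25)    # 1 = densely granulated, 0 = sparsely granulated

# Illustrative signature score: a logistic-regression combination of the features
scores = LogisticRegression(max_iter=1000).fit(features, labels).predict_proba(features)[:, 1]

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)               # Youden index: maximizes sensitivity + specificity - 1
print(f"AUC {auc:.3f}, sensitivity {tpr[best]:.3f}, specificity {1 - fpr[best]:.3f}")
```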


2021 ◽  
Vol 23 (06) ◽  
pp. 108-112
Author(s):  
Kiran S M ◽  
Dr. Chandrappa D N

Disease detection in plants is one of the essential steps in agriculture to improve the quality and yield of crops. Image processing plays a major role in the early detection of diseases, both in terms of accuracy and time consumption. In many plant health monitoring systems, Fourier and wavelet transforms are applied for feature extraction from plant images, which are then classified using different classifiers. In this study, tomato leaf images are collected from the PlantVillage database, preprocessed to improve the contrast, and then segmented using the k-means clustering technique. Texture features are extracted from the region of interest using the Discrete Wavelet Transform (DWT). Fourteen image features are obtained from Daubechies (db3), Symlet (sym3), and biorthogonal (Bior 3.3, Bior 3.5, Bior 3.7) wavelets. These features are used to classify the images as healthy or unhealthy with the help of a Support Vector Machine (SVM) classifier. The performance of the system is measured in terms of sensitivity, specificity, and accuracy. The proposed system classifies the images with a sensitivity of 92%, specificity of 84%, and accuracy of 88%.
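
A hedged sketch of DWT-based texture features: a 2-D wavelet decomposition with PyWavelets followed by a simple energy statistic per detail sub-band, then an SVM. The exact fourteen features used in the study are not reproduced; the sub-band statistic, the placeholder ROIs, and the labels are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(roi, wavelets=("db3", "sym3", "bior3.3", "bior3.5", "bior3.7")):
    """One energy-like statistic per detail sub-band for each wavelet family."""
    feats = []
    for w in wavelets:
        cA, (cH, cV, cD) = pywt.dwt2(roi, w)
        for band in (cH, cV, cD):
            feats.append(np.mean(np.abs(band)))   # sub-band mean magnitude as a texture cue
    return np.array(feats)

# Placeholder segmented-leaf ROIs and labels (0 = healthy, 1 = diseased)
rois = [np.random.rand(64, 64) for _ in range(30)]
labels = np.array([0] * 15 + [1] * 15)

X = np.array([dwt_features(r) for r in rois])
clf = SVC(kernel="rbf").fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```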


Author(s):  
Bixin Cai ◽  
Qidong Wang ◽  
Wuwei Chen ◽  
Linfeng Zhao ◽  
Huiran Wang

Vehicle detection plays a crucial role in the decision-making, planning, and control of intelligent vehicles. It is one of the main tasks of environmental perception and an essential part of ensuring driving safety. In order to capture distinctive vehicle features and improve vehicle recognition efficiency, this paper fuses image texture features and LIDAR edge features to detect frontal vehicle targets. First, wavelet analysis and geometric analysis are used to segment the ground and determine the region of interest for vehicle detection. Then, the point cloud of the detected vehicle is projected into the image to locate the ROI. Moreover, the extraction of the vehicle's edge features is guided by the maximum gradient direction of the vehicle's rear contour. Furthermore, Haar texture features are integrated to identify the vehicle, and a filter is designed according to the point cloud's spatial distribution to eliminate erroneous targets. Finally, real-vehicle comparison tests verify that the proposed fusion method can effectively improve vehicle detection without much additional computation time.
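
A simplified sketch of the point-cloud-to-image projection step used to locate the ROI, assuming a pinhole camera with known intrinsics K and LiDAR-to-camera extrinsics [R | t]; all matrices and points below are placeholders, not calibration values from the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # placeholder camera intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])   # placeholder LiDAR-to-camera rotation / translation

points = np.random.rand(500, 3) * [10, 4, 30]   # placeholder LiDAR points (x, y, z) on a vehicle

cam = (R @ points.T).T + t                      # transform into the camera frame
cam = cam[cam[:, 2] > 0]                        # keep only points in front of the camera
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                     # perspective divide -> pixel coordinates

# Image ROI as the bounding box of the projected vehicle points
u_min, v_min = uv.min(axis=0)
u_max, v_max = uv.max(axis=0)
print(f"ROI: ({u_min:.0f}, {v_min:.0f}) - ({u_max:.0f}, {v_max:.0f})")
```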


Author(s):  
Srinivasan A ◽  
Sudha S

One of the main causes of blindness is diabetic retinopathy (DR), and it may affect people of any age. Nowadays, both young and old people are affected by diabetes, and diabetes is the main cause of DR. Hence, an automated system with good accuracy and low computation time is needed to diagnose and treat DR, and such a system can simplify the work of ophthalmologists. The objective is to present an overview of recent work on detecting and segmenting the various lesions of DR. Papers were categorized based on the diagnostic tools and the methods used for detecting early- and advanced-stage lesions. The early lesions of DR are microaneurysms, hemorrhages, exudates, and cotton wool spots; in the advanced stage, new and fragile blood vessels can grow. Results have been evaluated in terms of sensitivity, specificity, accuracy, and the receiver operating characteristic curve. This paper analyzes the various steps and different algorithms recently used for the detection and classification of DR lesions. A comparison of performance has been made in terms of sensitivity, specificity, area under the curve, and accuracy. Suggestions, future work, and areas to be improved are also discussed. Keywords: diabetic retinopathy, image processing, morphological operations, neural network, fuzzy logic.


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2547 ◽  
Author(s):  
Wenxin Dai ◽  
Yuqing Mao ◽  
Rongao Yuan ◽  
Yijing Liu ◽  
Xuemei Pu ◽  
...  

Convolutional neural network (CNN)-based detectors have shown great performance on ship detection in synthetic aperture radar (SAR) images. However, the performance of current models is not yet satisfactory for detecting multiscale and small ships against complex backgrounds. To address this problem, we propose a novel CNN-based SAR ship detector, which consists of three subnetworks: the Fusion Feature Extractor Network (FFEN), the Region Proposal Network (RPN), and the Refine Detection Network (RDN). Instead of using a single feature map, we fuse feature maps in bottom-up and top-down ways and generate proposals from each fused feature map in the FFEN. Furthermore, we merge features generated by the region-of-interest (RoI) pooling layer in the RDN. Based on this feature representation strategy, the constructed CNN framework can significantly enhance the location and semantic information for multiscale ships, in particular for small ships. In addition, residual blocks are introduced to increase the network depth, through which the detection precision can be further improved. The public SAR ship dataset (SSDD) and China Gaofen-3 satellite SAR images are used to validate the proposed method. Our method shows excellent performance in detecting multiscale and small ships compared with several competitive models and exhibits high potential for practical application.
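
A rough PyTorch sketch of the bottom-up / top-down feature-fusion idea (in the spirit of an FPN-style pathway): coarse maps are upsampled and added to finer ones, and each fused level can then feed the proposal network. The channel counts and number of levels are assumptions, not the exact FFEN design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):                 # feats: bottom-up maps, coarsest last
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down path: upsample the coarser map and add it to the finer one.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]  # fused maps for the proposal stage

c3 = torch.randn(1, 256, 64, 64)
c4 = torch.randn(1, 512, 32, 32)
c5 = torch.randn(1, 1024, 16, 16)
fused = TopDownFusion()([c3, c4, c5])
print([f.shape for f in fused])
```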

