Image classification using concatenation of co-occurrence matrix features and local ternary patterns

Author(s):  
Faeze Kiani

Texture, color, and shape are the three main components that the human visual system uses to recognize environments and objects. Texture classification has therefore attracted many researchers over the last decade, and texture features are used in a wide range of vision and machine learning problems. Many methods have been proposed for classifying textures, and in all of them classification accuracy remains the main challenge to be improved. This article presents a new method based on a combination of two efficient texture descriptors: the co-occurrence matrix and local ternary patterns (LTP). First, the local binary pattern and LTP are applied to extract local texture information. Next, a subset of statistical features is extracted from the gray-level co-occurrence matrices. Finally, the concatenated features are used to train the classifier. Performance is evaluated for accuracy on the Brodatz benchmark dataset. Experimental results show that the proposed method achieves higher classification accuracy than several state-of-the-art methods.
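The pipeline described above — local ternary patterns concatenated with co-occurrence statistics — can be sketched in plain numpy. This is a minimal illustration, not the authors' implementation; the quantization level, LTP threshold, and the particular GLCM statistics (contrast, energy, homogeneity) are illustrative assumptions.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """A few Haralick-style statistics from a gray-level co-occurrence matrix."""
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1  # count co-occurring pairs
    p = glcm / glcm.sum()
    i_idx, j_idx = np.indices(p.shape)
    contrast = np.sum(p * (i_idx - j_idx) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i_idx - j_idx)))
    return np.array([contrast, energy, homogeneity])

def ltp_histograms(img, t=5):
    """Local ternary pattern: the 3-valued code splits into two binary codes."""
    c = img[1:-1, 1:-1].astype(int)
    offsets = [(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (di, dj) in enumerate(offsets):
        n = img[1+di:img.shape[0]-1+di, 1+dj:img.shape[1]-1+dj].astype(int)
        upper += (n >= c + t).astype(int) << k   # +1 branch of the ternary code
        lower += (n <= c - t).astype(int) << k   # -1 branch of the ternary code
    hu, _ = np.histogram(upper, bins=256, range=(0, 256))
    hl, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hu, hl]) / c.size     # two normalized histograms

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
features = np.concatenate([glcm_features(img), ltp_histograms(img)])
```

The concatenated vector (3 GLCM statistics + 512 LTP histogram bins here) would then feed any standard classifier.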

2020 ◽  
Vol 16 (6) ◽  
pp. 421-429
Author(s):  
Praveen Kumar Moganam ◽  
Denis Ashok Sathia Seelan

Detection of defects in a typical leather surface is a difficult task due to the complex, non-homogeneous and random nature of the texture pattern. This paper presents a texture-analysis-based leather defect identification approach using a neural network classification of defective and non-defective leather. In this work, the Gray Level Co-occurrence Matrix (GLCM) is used for extracting different statistical texture features of defective and non-defective leather. Based on the labelled dataset of texture features, a perceptron neural network classifier is trained for defect identification. Five commonly occurring leather defects, namely folding marks, grain off, growth marks, loose grain and pin holes, were detected, and the classification results of the perceptron network are presented. The proposed method was tested on an image library of 1232 leather samples, and the classification accuracy for defect detection, computed from the confusion matrix, was found to be 94.2%. The proposed method can be implemented in an industrial environment for the automation of the leather inspection process.
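The train-a-perceptron-on-GLCM-features step can be sketched with the classic perceptron learning rule. The feature vectors below are hypothetical stand-ins (two synthetic Gaussian clusters playing the role of GLCM statistics for non-defective and defective patches), not the paper's leather data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical GLCM-style feature vectors (contrast, energy, homogeneity, entropy):
# one cluster for non-defective patches (label 0), one for defective (label 1).
X_good = rng.normal([1.0, 0.8, 0.9, 2.0], 0.1, size=(50, 4))
X_bad  = rng.normal([3.0, 0.3, 0.4, 4.0], 0.1, size=(50, 4))
X = np.vstack([X_good, X_bad])
y = np.array([0] * 50 + [1] * 50)

# Classic perceptron rule: w += lr * (target - prediction) * x
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(20):                       # a few epochs over the training set
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

pred = (X @ w + b > 0).astype(int)
accuracy = (pred == y).mean()
```

For linearly separable feature clusters like these, the perceptron converges to a perfect separator; real defect features overlap more, which is why the paper reports accuracy via a confusion matrix.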


2014 ◽  
Vol 13 (12) ◽  
pp. 5286-5300 ◽  
Author(s):  
A. Srinivasa Rao ◽  
V. Venkata Krishna ◽  
Y. K. Sundara Krishna

The present paper derives a new model of texture image retrieval by integrating transitions on the Local Binary Pattern (LBP) with textons and the Grey Level Co-occurrence Matrix (GLCM). The paper first derives the transitions from 0 to 1 or 1 to 0 that occur in a circular manner on the LBP code. These transitions reduce the 256 LBP codes to five texture features, which greatly reduces complexity. While LBP codes are rotationally variant, the proposed circular transitions on LBP are rotationally invariant. Textons, which represent local relationships, are then detected on this representation, and GLCM features are evaluated on the texton-based image for efficient image retrieval. The proposed method is evaluated on a large database of textures collected from Google, and the experimental results indicate the efficiency of the proposed model.
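The claim that circular 0↔1 transition counts collapse the 256 LBP codes into five rotation-invariant features can be checked directly: for an 8-bit circular code the transition count is always even, so only the values 0, 2, 4, 6 and 8 occur. A short sketch (not the paper's code):

```python
def lbp_transitions(code):
    """Count circular 0<->1 transitions in an 8-bit LBP code.

    Rotating the neighbourhood rotates the bits but leaves the number of
    transitions unchanged, so this count is rotation invariant.
    """
    bits = [(code >> k) & 1 for k in range(8)]
    return sum(bits[k] != bits[(k + 1) % 8] for k in range(8))

counts = {lbp_transitions(c) for c in range(256)}
# counts == {0, 2, 4, 6, 8}: the 256 codes collapse into five classes,
# matching the five texture features described in the paper.
```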


2021 ◽  
Vol 13 (4) ◽  
pp. 552
Author(s):  
Johannes Lohse ◽  
Anthony P. Doulgeris ◽  
Wolfgang Dierking

Robust and reliable classification of sea ice types in synthetic aperture radar (SAR) images is needed for various operational and environmental applications. Previous studies have investigated the class-dependent decrease in SAR backscatter intensity with incident angle (IA); others have shown the potential of textural information to improve automated image classification. In this work, we investigate the inclusion of Sentinel-1 (S1) texture features into a Bayesian classifier that accounts for linear per-class variation of its features with IA. We use the S1 extra-wide swath (EW) product in ground-range detected format at medium resolution (GRDM), and we compute seven grey level co-occurrence matrix (GLCM) texture features from the HH and the HV backscatter intensity in the linear and logarithmic domain. While GLCM texture features obtained in the linear domain vary significantly with IA, the features computed from the logarithmic intensity do not depend on IA or reveal only a weak, approximately linear dependency. They can therefore be directly included in the IA-sensitive classifier that assumes a linear variation. The different number of looks in the first sub-swath (EW1) of the product causes a distinct offset in texture at the sub-swath boundary between EW1 and the second sub-swath (EW2). This offset must be considered when using texture in classification; we demonstrate a manual correction for the example of GLCM contrast. Based on the Jeffries–Matusita distance between class histograms, we perform a separability analysis for 57 different GLCM parameter settings. We select a suitable combination of features for the ice classes in our data set and classify several test images using a combination of intensity and texture features. We compare the results to a classifier using only intensity. Particular improvements are achieved for the generalized separation of ice and water, as well as the classification of young ice and multi-year ice.
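The distinction between texture computed in the linear and the logarithmic (dB) intensity domain can be illustrated with a toy numpy sketch. The synthetic gamma-distributed backscatter, quantization level, and the choice of GLCM contrast for horizontal pixel pairs are illustrative assumptions, not the authors' Sentinel-1 processing chain.

```python
import numpy as np

def glcm_contrast(img, levels=16):
    """GLCM contrast for horizontal pixel pairs after uniform quantization."""
    lo, hi = img.min(), img.max()
    q = np.clip(((img - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # horizontal pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

rng = np.random.default_rng(2)
# Toy multi-look backscatter intensity (gamma-distributed speckle model).
sigma0_linear = rng.gamma(shape=4, scale=0.02, size=(128, 128))
sigma0_db = 10 * np.log10(sigma0_linear)   # same scene in the dB domain

c_lin = glcm_contrast(sigma0_linear)       # texture from linear intensity
c_db = glcm_contrast(sigma0_db)            # texture from log intensity
```

Because the dB transform compresses the multiplicative speckle range, texture statistics computed in the log domain are more stable across brightness (and hence IA) changes, which is what lets them enter the linear-in-IA classifier directly.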


Author(s):  
Weiguo Cao ◽  
Marc J. Pomeroy ◽  
Yongfeng Gao ◽  
Matthew A. Barish ◽  
Almas F. Abbasi ◽  
...  

Abstract Texture features have played an essential role in the field of medical imaging for computer-aided diagnosis. The gray-level co-occurrence matrix (GLCM)-based texture descriptor has emerged as one of the most successful feature sets for these applications. This study aims to increase the potential of these features by introducing multi-scale analysis into the construction of the GLCM texture descriptor. We first introduce a new parameter, stride, to extend the definition of the GLCM. We then propose three multi-scaling GLCM models, one for each of its three parameters: (1) a learning model by multiple displacements, (2) a learning model by multiple strides (LMS), and (3) a learning model by multiple angles. These models increase the texture information by introducing more texture patterns, and they mitigate the direction-sparsity and dense-sampling problems of the traditional Haralick model. To further analyze the three parameters, we evaluate the three models by performing classification on a dataset of 63 large polyp masses obtained from computed tomography colonography, consisting of 32 adenocarcinomas and 31 benign adenomas. Finally, the proposed methods are compared to several typical GLCM texture descriptors and one deep learning model. LMS obtains the highest performance, enhancing the prediction power, measured by the area under the receiver operating characteristic curve, to 0.9450 with a standard deviation of 0.0285, a significant improvement.
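The multi-displacement variant of the idea above — building GLCMs at several scales and concatenating their statistics — can be sketched as follows. This shows only the standard displacement parameter; the paper's stride and angle models are analogous, and the displacements (1, 2, 4) and statistics chosen here are illustrative assumptions.

```python
import numpy as np

def glcm_props(img, d, levels=16):
    """Contrast and energy from a horizontal GLCM at pixel displacement d."""
    q = (img.astype(float) / 256 * levels).astype(int)      # quantize 0..15
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-d].ravel(), q[:, d:].ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return [float(np.sum(p * (i - j) ** 2)),                # contrast
            float(np.sum(p ** 2))]                          # energy

img = np.random.default_rng(3).integers(0, 256, (64, 64))

# Multi-scale descriptor: concatenate statistics over several displacements.
multi_scale = np.concatenate([glcm_props(img, d) for d in (1, 2, 4)])
```

Each displacement captures pairwise structure at a different spatial scale, so the concatenated vector encodes more texture patterns than a single-displacement Haralick descriptor.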


2021 ◽  
Vol 13 (6) ◽  
pp. 1146
Author(s):  
Yuliang Nie ◽  
Qiming Zeng ◽  
Haizhen Zhang ◽  
Qing Wang

Synthetic aperture radar (SAR) is an effective tool for detecting building damage. At present, more and more studies detect building damage using a single post-event fully polarimetric SAR (PolSAR) image, because it permits faster and more convenient damage detection. However, the presence of non-building areas and obliquely-oriented buildings in disaster areas makes it challenging to obtain accurate detection results from post-event PolSAR data alone. To solve these problems, a new method is proposed in this work to detect completely collapsed buildings using a single post-event fully polarimetric SAR image. The proposed method makes two improvements to building damage detection. First, it provides a more effective solution for removing non-building areas from post-event PolSAR images. By selecting and combining three competitive polarization features, the proposed solution can effectively remove most non-building areas, including mountain vegetation and farmland, which are easily confused with collapsed buildings. Second, it significantly improves the classification of collapsed and standing buildings. A new polarization feature was created specifically for the classification of obliquely-oriented and collapsed buildings by developing the optimization of polarimetric contrast enhancement (OPCE) matching algorithm. Using this feature combined with texture features, the proposed method effectively distinguished collapsed from obliquely-oriented buildings, while also identifying collapsed buildings in error-prone areas.
Experiments were implemented on three PolSAR datasets obtained in fully polarimetric mode: Radarsat-2 data from the 2010 Yushu earthquake in China (resolution: 12 m); ALOS PALSAR data from the 2011 Tohoku tsunami in Japan (resolution: 23.14 m); and ALOS-2 data from the 2016 Kumamoto earthquake in Japan (resolution: 5.1 m). In these experiments, the proposed method achieved more than 90% accuracy for built-up area extraction from post-event PolSAR data. The building damage detection accuracies were 82.3%, 97.4%, and 78.5% at the Yushu, Ishinomaki, and Mashiki study sites, respectively.


Sensors ◽  
2017 ◽  
Vol 17 (3) ◽  
pp. 559 ◽  
Author(s):  
Alan Bourke ◽  
Espen Ihlen ◽  
Ronny Bergquist ◽  
Per Wik ◽  
Beatrix Vereijken ◽  
...  

2020 ◽  
Vol 43 (1) ◽  
pp. 29-45
Author(s):  
Alex Noel Joseph Raj ◽  
Ruban Nersisson ◽  
Vijayalakshmi G. V. Mahesh ◽  
Zhemin Zhuang

The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in breast mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Here the Hu moments and Gray-level Co-occurrence Matrix (GLCM) are calculated through an iterative sliding window for the extraction of shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain candidate NSAs. Contour features, such as shape complexity through fractal dimension, edge distance from the periphery, and contour area, are then computed and passed into a Support Vector Machine (SVM) to identify the correct NSA in each case. The coronal-plane BUS dataset was built in-house and consists of 64 images from 13 patients. The test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity and 88% F-score on our dataset.
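The sliding-window shape-feature step can be sketched with one representative Hu-style invariant. For brevity this computes only the first Hu moment (eta20 + eta02) per window; the full system concatenates all seven Hu moments with GLCM statistics. Window size and stride are illustrative assumptions.

```python
import numpy as np

def hu_phi1(patch):
    """First Hu moment (eta20 + eta02): translation/scale-invariant shape cue."""
    patch = patch.astype(float)
    m00 = patch.sum()
    ys, xs = np.indices(patch.shape)
    xbar = (xs * patch).sum() / m00          # intensity centroid
    ybar = (ys * patch).sum() / m00
    mu20 = (((xs - xbar) ** 2) * patch).sum()  # central moments
    mu02 = (((ys - ybar) ** 2) * patch).sum()
    return (mu20 + mu02) / m00 ** 2          # normalized: eta20 + eta02

def sliding_features(img, win=16, step=8):
    """One shape feature per window; GLCM statistics would be appended here."""
    feats = []
    for i in range(0, img.shape[0] - win + 1, step):
        for j in range(0, img.shape[1] - win + 1, step):
            feats.append(hu_phi1(img[i:i + win, j:j + win]))
    return np.array(feats)

img = np.random.default_rng(4).integers(1, 256, (64, 64))
f = sliding_features(img)   # one feature per window position
```

Each window's feature vector is what the ANN scores to flag candidate nipple-shadow windows; the SVM stage then refines the candidates using contour features.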


2014 ◽  
Vol 668-669 ◽  
pp. 1041-1044
Author(s):  
Lin Lin Song ◽  
Qing Hu Wang ◽  
Zhi Li Pei

This paper first studies texture features. We construct a gray-difference primitive co-occurrence matrix to extract texture features by combining statistical and structural methods. The experimental results show that the features of the gray-difference primitive co-occurrence matrix capture finer detail than those of the traditional gray-level co-occurrence matrix.
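One plausible reading of a gray-difference co-occurrence construction is to take local gray-level differences as the primitive and then count how adjacent difference values co-occur. This sketch is an assumption about the construction, not the authors' exact definition; the difference direction and quantization are illustrative.

```python
import numpy as np

def gray_diff_cooccurrence(img, levels=8):
    """Co-occurrence matrix of quantized gray differences (the 'primitive')."""
    diff = np.abs(np.diff(img.astype(int), axis=1))           # horizontal differences
    q = np.clip(diff * levels // 256, 0, levels - 1)          # quantize differences
    mat = np.zeros((levels, levels))
    np.add.at(mat, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # adjacent diff pairs
    return mat / mat.sum()                                    # normalize to probabilities

img = np.random.default_rng(5).integers(0, 256, (32, 32))
p = gray_diff_cooccurrence(img)
```

Because differences respond to local structure rather than absolute brightness, statistics of this matrix emphasize edge-scale variation that a plain gray-level co-occurrence matrix averages out.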


BMC Cancer ◽  
2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Sihua Niu ◽  
Jianhua Huang ◽  
Jia Li ◽  
Xueling Liu ◽  
Dan Wang ◽  
...  

Abstract Background The classification of Breast Imaging Reporting and Data System 4A (BI-RADS 4A) lesions is mostly based on the personal experience of doctors and lacks specific and clear classification standards. The development of artificial intelligence (AI) provides a new method for BI-RADS categorisation. We analysed the ultrasonic morphological and texture characteristics of BI-RADS 4A benign and malignant lesions using AI and compared these characteristics between the two groups to examine the value of AI in their differential diagnosis. Methods A total of 206 BI-RADS 4A lesions examined using ultrasonography were analysed retrospectively, including 174 benign lesions and 32 malignant lesions. All of the lesions were contoured manually, and ultrasonic morphological and texture features of the lesions, such as circularity, height-to-width ratio, margin spicules, margin coarseness, margin indistinctness, margin lobulation, energy, entropy, grey mean, internal calcification and the angle between the long axis of the lesion and the skin (ALS), were calculated using grey level gradient co-occurrence matrix analysis. Differences between benign and malignant BI-RADS 4A lesions were analysed. Results Significant differences in margin lobulation, entropy, internal calcification and ALS were noted between the benign and malignant groups (P = 0.013, 0.045, 0.045, and 0.002, respectively). The malignant group had more margin lobulations and lower entropy than the benign group, while the benign group had more internal calcifications and a greater ALS than the malignant group. No significant differences in circularity, height-to-width ratio, margin spicules, margin coarseness, margin indistinctness, energy, or grey mean were noted between benign and malignant lesions.
Conclusions Compared with the naked eye, AI can reveal more subtle differences between benign and malignant BI-RADS 4A lesions. These results suggest that careful observation of the margin and the internal echo is of great significance. With the help of the morphological and texture information provided by AI, doctors can make more accurate judgments on such atypical benign and malignant lesions.

