Performance Analysis of Anisotropic Diffusion Based Colour Texture Descriptors in Industrial Applications

Author(s):  
Prakash S. Hiremath ◽  
Rohini A. Bhusnurmath

A novel method of colour texture analysis based on anisotropic diffusion is proposed for industrial applications, and the performance of the resulting colour texture descriptors is analysed. The objective of the study is to explore different colour spaces for their suitability in the automatic classification, using computer vision, of certain textures in industrial applications, namely, granite tiles and wood textures. The directional subbands of digital images of material samples, obtained using the wavelet transform, are subjected to anisotropic diffusion to obtain the texture components. Statistical features are then extracted from the texture components, and linear discriminant analysis is employed to achieve class separability. The texture descriptors are evaluated on the RGB, HSV, YCbCr and Lab colour spaces and compared with grayscale texture descriptors. The k-NN classifier is used for texture classification. For the experimentation, the benchmark databases MondialMarmi and Parquet are considered. The experimental results are encouraging as compared to state-of-the-art methods.
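The diffusion-plus-statistics stage of the pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration of Perona-Malik anisotropic diffusion followed by first-order statistical features, not the authors' implementation; the parameter values (`kappa`, `lam`, iteration count) are illustrative assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=15.0, lam=0.2):
    """Single-channel Perona-Malik anisotropic diffusion (4-neighbour scheme)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        # conduction coefficients (exponential edge-stopping function)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        # explicit update; lam <= 0.25 keeps the scheme stable
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def stat_features(x):
    """First-order statistics of a texture component, used as descriptors."""
    return np.array([x.mean(), x.std(), np.abs(x).mean(), (x ** 2).mean()])
```

In the full method such features would be computed per subband and per colour channel, concatenated, projected by LDA, and fed to the k-NN classifier.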

Author(s):  
Rohini A. Bhusnurmath ◽  
Prakash S. Hiremath

This chapter proposes a computer vision framework for industrial applications. The proposed framework uses the wavelet transform to obtain multiresolution images, and anisotropic diffusion is employed to obtain the texture component. Various feature sets, and combinations thereof, are obtained from the texture component. Linear discriminant analysis is employed to obtain discriminative features, and the k-NN classifier is used for classification. The proposed method is evaluated on benchmark datasets for texture classification. Further, the method is extended to explore different color spaces in order to establish a reference standard. The focus is on industrial applications of machine intelligence in computer vision. Experiments are conducted on the industrial datasets MondialMarmi (granite tiles) and Parquet (wood textures). It was observed that the combination of features performs better in the YCbCr and HSV color spaces for the MondialMarmi and Parquet datasets as compared to other methods in the literature.
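Exploring color spaces, as described above, begins with a pixel-wise conversion of the input image. As a minimal sketch, the standard BT.601 full-range RGB-to-YCbCr transform (one of the color spaces the chapter evaluates) can be written as:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr; inputs and outputs are floats in 0..255."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```

Each resulting channel would then be processed by the wavelet/diffusion/feature pipeline independently.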


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1459 ◽  
Author(s):  
Tamás Czimmermann ◽  
Gastone Ciuti ◽  
Mario Milazzo ◽  
Marcello Chiurazzi ◽  
Stefano Roccella ◽  
...  

This paper reviews automated visual-based defect detection approaches applicable to various materials, such as metals, ceramics and textiles. In the first part of the paper, we present a general taxonomy of the different defects, which fall into two classes: visible (e.g., scratches, shape error, etc.) and palpable (e.g., crack, bump, etc.) defects. Then, we describe artificial visual processing techniques that are aimed at understanding the captured scene in a mathematical/logical way. We continue with a survey of textural defect detection based on statistical, structural and other approaches. Finally, we report the state of the art in the detection and classification of defects through supervised and unsupervised classifiers and deep learning.


Author(s):  
SHAIKHJI ZAID M ◽  
J B JADHAV ◽  
V N KAPADIA

Textures play important roles in many image processing applications, since images of real objects often do not exhibit regions of uniform and smooth intensities, but rather variations of intensity with certain repeated structures or patterns, referred to as visual texture. These textural patterns or structures mainly result from physical surface properties, such as roughness or oriented structure of a tactile quality. It is widely recognized that a visual texture, while easy to perceive, is very difficult to define. The difficulty results mainly from the fact that different people define texture in application-dependent ways or with different perceptual motivations, and there is no generally agreed-upon single definition of texture [1]. Developments in multiresolution analysis, such as the Gabor and wavelet transforms, help to overcome this difficulty. This paper describes texture classification using Wavelet Statistical Features (WSF), Wavelet Co-occurrence Features (WCF), and a combination of the two, computed from wavelet-transformed images over different feature databases, and shows that the combination yields better results [2]. Several image-degrading parameters are introduced into the images to be classified in order to verify the robustness of the features. Wavelet-based decomposition is used to classify the images, with the code implemented in MATLAB.
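The two feature families named above can be sketched briefly: WSF are statistics of wavelet subbands, while WCF are derived from a grey-level co-occurrence matrix (GLCM) of the subbands. The following is a minimal NumPy sketch (the paper itself uses MATLAB), assuming a one-level Haar decomposition and a horizontal-offset GLCM; the quantisation to 8 levels is an illustrative choice.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # crop to even size
    rl = (a[0::2, :] + a[1::2, :]) / 2.0                 # row low-pass
    rh = (a[0::2, :] - a[1::2, :]) / 2.0                 # row high-pass
    ll = (rl[:, 0::2] + rl[:, 1::2]) / 2.0
    lh = (rl[:, 0::2] - rl[:, 1::2]) / 2.0
    hl = (rh[:, 0::2] + rh[:, 1::2]) / 2.0
    hh = (rh[:, 0::2] - rh[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def glcm(img, levels=8):
    """Normalised grey-level co-occurrence matrix for a horizontal offset of 1."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()
```

WSF would then be statistics (mean, energy, etc.) of each subband, and WCF would be scalar properties (contrast, correlation, entropy) of the GLCM of each subband.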


Author(s):  
Hafiz Malik

This chapter provides a critical analysis of the current state of the art in steganography. The first part of the chapter provides a classification of steganography based on the underlying information-hiding methodology and covert-channel type, along with the desired features of information hiding for covert communication. The chapter also discusses various known steganalysis techniques developed to counteract covert communication and highlights the limitations of existing steganographic techniques. A performance analysis of commonly used shareware/freeware steganographic and steganalysis tools is also provided. Some open problems in covert communication are also discussed.


Author(s):  
Jingying Zhao ◽  
Na Dong ◽  
Hai Guo ◽  
Yifan Liu ◽  
Doudou Yang

Since different Dai languages require different recognition methods, we propose a novel method of text-line recognition for New Tai Lue and Lanna Dai based on statistical texture features and a Deep Gaussian process, which can classify the different Dai text lines. First, the Dai text-line database is constructed, and the images are preprocessed by denoising and size standardization. Gabor multi-scale decomposition is carried out on the two types of Dai text-line images, and the statistical features of image entropy and average row variance are then extracted. A multi-layer Deep Gaussian process classifier is constructed. Experiments show that the accuracy of text-line classification of New Tai Lue and Lanna Dai based on the Deep Gaussian process is 99.89%, with precision, recall and F1-score of 1, 0.9978 and 0.9989, respectively. The combination of Gabor texture statistics (image entropy and average row variance) and the Deep Gaussian process model can effectively classify the text lines of New Tai Lue and Lanna Dai. Comparative experiments show that the classification accuracy of the model is superior to that of traditional methods such as Gaussian Naive Bayes, Random Forest, Decision Tree, and Gaussian Process.
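The Gabor-plus-statistics feature extraction described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the kernel size, wavelength, and histogram bin count are illustrative assumptions, and a real system would pool responses over several scales and orientations.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=6.0):
    """Real part of a Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lambd)

def filter_image(img, kern):
    """Same-size linear convolution via FFT."""
    sh = (img.shape[0] + kern.shape[0] - 1, img.shape[1] + kern.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, sh) * np.fft.rfft2(kern, sh), sh)
    h, w = kern.shape[0] // 2, kern.shape[1] // 2
    return full[h:h + img.shape[0], w:w + img.shape[1]]

def image_entropy(x, bins=32):
    """Shannon entropy of the response histogram (the paper's entropy feature)."""
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def average_row_variance(x):
    """Mean of per-row variances (the paper's row-variance feature)."""
    return x.var(axis=1).mean()
```

The resulting feature vectors would then be passed to the Deep Gaussian process classifier.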


2015 ◽  
Vol 27 (02) ◽  
pp. 1550015 ◽  
Author(s):  
Assya Bousbia-Salah ◽  
Malika Talha-Kedir

Wavelet transform decomposition of electroencephalogram (EEG) signals has been widely used for the analysis and detection of epileptic seizures. However, the classification of EEG signals is still challenging because of their high nonstationarity and high dimensionality. The aim of this work is the automatic classification of EEG recordings using statistical feature extraction and a support vector machine. From a real database, two sets of EEG signals are used: EEG recorded from a healthy person, and from an epileptic person during seizures. Three important statistical features are computed at the different sub-bands of the discrete wavelet and wavelet packet decompositions of the EEG recordings. To select the best wavelet for this application, five wavelet basis functions are considered. After reducing the dimension of the obtained data by linear discriminant analysis and principal component analysis (PCA), the feature vectors are used to train the support vector machine classifier. To show the efficiency of this approach, the statistical classification performance is evaluated; a best classification accuracy of 100% is obtained and compared with results reported in other studies on the same dataset. This method is not meant to replace the clinician, but can assist in diagnosis and reinforce the clinical decision.
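The sub-band feature extraction step described above can be sketched for a 1-D signal. This is a minimal NumPy illustration, not the study's implementation: it uses the Haar wavelet (the study compares five wavelet bases) and three representative per-band statistics; the choice of statistics is an illustrative assumption.

```python
import numpy as np

def haar_dwt1(x, levels=3):
    """Multi-level 1-D Haar DWT: returns [cD1, cD2, ..., cA_levels]."""
    a = np.asarray(x, float)
    coeffs = []
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]                 # crop to even length
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation coefficients
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def band_features(c):
    """Three statistics per sub-band: mean |c|, standard deviation, mean power."""
    c = np.asarray(c, float)
    return [np.abs(c).mean(), c.std(), (c ** 2).mean()]
```

Concatenating `band_features` over all sub-bands gives the feature vector that would then be reduced by LDA/PCA and classified by the SVM.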


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Aysen Degerli ◽  
Mete Ahishali ◽  
Mehmet Yamac ◽  
Serkan Kiranyaz ◽  
Muhammad E. H. Chowdhury ◽  
...  

Computer-aided diagnosis has become a necessity for accurate and immediate coronavirus disease 2019 (COVID-19) detection to aid treatment and prevent the spread of the virus. Numerous studies have proposed Deep Learning techniques for COVID-19 diagnosis, but they have evaluated them on very limited chest X-ray (CXR) image repositories containing only a few hundred COVID-19 samples. Moreover, these methods can neither localize nor grade the severity of COVID-19 infection. For this purpose, recent studies have proposed exploring the activation maps of deep networks; however, these remain inaccurate for localizing the actual infection, making them unreliable for clinical use. This study proposes a novel method for the joint localization, severity grading, and detection of COVID-19 from CXR images by generating so-called infection maps. To accomplish this, we have compiled the largest dataset to date, with 119,316 CXR images including 2951 COVID-19 samples, where the ground-truth segmentation masks are annotated on the CXRs by a novel collaborative human–machine approach. Furthermore, we publicly release the first CXR dataset with ground-truth segmentation masks of the COVID-19 infected regions. A detailed set of experiments shows that state-of-the-art segmentation networks can learn to localize COVID-19 infection with an F1-score of 83.20%, which is significantly superior to the activation maps created by previous methods. Finally, the proposed approach achieved a COVID-19 detection performance of 94.96% sensitivity and 99.88% specificity.

