Image texture classification using a multiagent genetic clustering algorithm

Author(s):  
Jiulei Geng ◽  
Jing Liu
2020 ◽  
Vol 2020 (10) ◽  
pp. 310-1-310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC: AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images’ correlation structure again and can improve this AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for the CNN based on task difficulty, compression method, and number of training images.
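As an illustrative sketch (the toy dimension, covariances, and variable names are assumptions, not the paper's setup), the IO test statistic for zero-mean Gaussians with unequal covariances, and its invariance to invertible linear processing, can be checked numerically:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 8  # toy "image" dimension

# Two zero-mean Gaussian texture classes with unequal covariances.
A = rng.standard_normal((d, d))
K0 = np.eye(d)
K1 = np.eye(d) + 0.5 * (A @ A.T) / d

x0 = rng.multivariate_normal(np.zeros(d), K0, size=2000)
x1 = rng.multivariate_normal(np.zeros(d), K1, size=2000)
X = np.vstack([x0, x1])
y = np.r_[np.zeros(2000), np.ones(2000)]

# IO statistic t(x) = x^T (K0^-1 - K1^-1) x, a monotone function of the
# log-likelihood ratio for zero-mean Gaussians.
M = np.linalg.inv(K0) - np.linalg.inv(K1)
t = np.einsum("ij,jk,ik->i", X, M, X)
auc = roc_auc_score(y, t)

# IO AUC is invariant to any invertible linear transform x -> W x:
# recompute the statistic from the transformed images and covariances.
W = rng.standard_normal((d, d)) + 3.0 * np.eye(d)  # almost surely invertible
Xw = X @ W.T
Mw = np.linalg.inv(W @ K0 @ W.T) - np.linalg.inv(W @ K1 @ W.T)
tw = np.einsum("ij,jk,ik->i", Xw, Mw, Xw)
```

On this toy configuration the two statistics agree to numerical precision, illustrating the invariance claim; the degradation of trained CNNs under such transforms is the paper's empirical result and is not reproduced here.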


2007 ◽  
Vol 16 (06) ◽  
pp. 919-934
Author(s):  
YONGGUO LIU ◽  
XIAORONG PU ◽  
YIDONG SHEN ◽  
ZHANG YI ◽  
XIAOFENG LIAO

In this article, a new genetic clustering algorithm, the Improved Hybrid Genetic Clustering Algorithm (IHGCA), is proposed for the clustering problem under the minimum-sum-of-squares criterion. In IHGCA, an improvement operation comprising five local iteration methods tunes each individual and accelerates the convergence of the clustering algorithm, and a partition-absorption mutation operation reassigns objects among clusters. Experimental simulations demonstrate its superiority over several known genetic clustering methods.
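A minimal sketch of the minimum-sum-of-squares objective such a genetic clustering algorithm optimizes (the label encoding and function name are assumptions for illustration, not IHGCA's actual implementation):

```python
import numpy as np

def sse_fitness(X, labels, k):
    """Minimum sum-of-squares clustering objective for a label-encoded
    GA individual: total squared distance of each point to its cluster
    centroid. Lower SSE means a fitter individual."""
    sse = 0.0
    for c in range(k):
        pts = X[labels == c]
        if len(pts):
            sse += float(((pts - pts.mean(axis=0)) ** 2).sum())
    return sse
```

A GA would minimize this value (or equivalently maximize a transform such as 1/(1+SSE)); the improvement and mutation operators described in the abstract modify the label vector between evaluations.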


Author(s):  
Abbas F. H. Alharan ◽  
Hayder K. Fatlawi ◽  
Nabeel Salih Ali

<p>Computer vision and pattern recognition applications, such as texture image analysis and texture feature extraction, are established research trends in engineering technology and scientific research. Several studies have pursued accurate image feature extraction and classification, but most have shortcomings. It is therefore important to improve classification accuracy by reducing the dimensionality of the feature set. This paper presents a cluster-based feature selection approach that adopts a more discriminative subset of texture features, evaluated on three texture image datasets. The approach proceeds in several steps. First, texture features are extracted via the Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), and Gabor filters. Second, features are selected using the K-means clustering algorithm guided by five feature evaluation metrics: information gain, gain ratio, OneR, ReliefF, and symmetric uncertainty. Finally, K-Nearest Neighbor (KNN), Naive Bayes (NB), and Support Vector Machine (SVM) classifiers are used to evaluate classification performance and accuracy. The approach achieved 99.9554% accuracy with the KNN and NB classifiers on the Kylberg dataset and 99.0625% with SVM on the Brodatz-1 and Brodatz-2 datasets, respectively. A comparison with other studies gives a unified view of the quality of the results and identifies future research directions.</p>
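The feature selection step can be sketched as follows. This is an illustrative cluster-based selection in the spirit of the abstract, not the paper's exact pipeline: feature columns are grouped with K-means, and the highest-scoring feature (here scored by mutual information with the labels, a stand-in for the five evaluators named above) is kept from each group.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

def cluster_select(X, y, n_keep, seed=0):
    """Cluster-based feature selection sketch: cluster the feature
    columns (not the samples) with K-means, then keep the feature with
    the best relevance score from each cluster. Returns column indices."""
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=seed)
    groups = km.fit_predict(X.T)  # each feature column becomes one point
    scores = mutual_info_classif(X, y, random_state=seed)
    keep = [int(np.flatnonzero(groups == g)[np.argmax(scores[groups == g])])
            for g in range(n_keep)]
    return sorted(keep)
```

Because correlated features land in the same cluster, keeping one representative per cluster shrinks the feature set while preserving its discriminative coverage, which is the dimensionality reduction the abstract argues for.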


Effective software systems must evolve to stay relevant, yet this evolution can cause the architecture to decay, markedly reducing productivity and even leading to cancelled projects. Remodularization can be performed to repair the structure of a software system and remove the deterioration introduced by ongoing development. Software remodularization consists of rearranging software entities into modules such that entities belonging to the same module are more similar to one another than to entities in other modules. However, automatically remodularizing systems to improve their maintainability is challenging. This paper introduces an automatic software remodularization procedure that helps maintainers improve modularization quality by assessing coupling and cohesion among software components. For precise coupling measurement, the proposed technique uses structural coupling metrics: the count of a class's member functions used by a given class serves as the basic coupling measure between classes, and the interactions between class files capture the structural connections between software elements (classes). A probability-based remodularization (PBR) approach is proposed: files are ordered by a probability-based process, and remodularization is driven by the dependency strength (connectivity) among files. The technique is evaluated on seven software systems, with efficiency measured by Turbo Modularization Quality (Turbo MQ) over an edge-weighted module dependency graph (MDG).
Compared with existing techniques such as Bunch-GA (Genetic Algorithm), DAGC (Development of a Genetic Clustering Algorithm), and the Estimation of Distribution Algorithm (EDA), the proposed methodology achieves a higher Turbo MQ value and lower time complexity than Bunch-GA on the software systems assessed.
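The Turbo MQ measure used for evaluation can be sketched as below. This follows the common Bunch-style formulation (cluster factor CF_i = 2·μ_i / (2·μ_i + Σ inter-cluster weight touching cluster i), summed over clusters); the paper's exact weighting is not specified here, so treat the details as assumptions.

```python
def turbo_mq(edges, assign):
    """Turbo MQ of a weighted module dependency graph (MDG).
    edges:  {(u, v): weight} directed dependencies between files/classes.
    assign: {node: cluster id} - one candidate modularization.
    Each cluster gets a cluster factor CF = 2*intra / (2*intra + inter);
    Turbo MQ is the sum of the cluster factors (higher is better)."""
    intra, inter = {}, {}
    for (u, v), w in edges.items():
        cu, cv = assign[u], assign[v]
        if cu == cv:
            intra[cu] = intra.get(cu, 0.0) + w
        else:
            inter[cu] = inter.get(cu, 0.0) + w
            inter[cv] = inter.get(cv, 0.0) + w
    mq = 0.0
    for c in set(assign.values()):
        mu, eps = intra.get(c, 0.0), inter.get(c, 0.0)
        if mu or eps:
            mq += 2 * mu / (2 * mu + eps)
    return mq
```

For a toy MDG with two tight two-node clusters joined by a single cross edge, grouping the pairs together scores higher than splitting them, which is exactly the behavior an MQ-maximizing remodularization exploits.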


2004 ◽  
Vol 25 (19) ◽  
pp. 4043-4050 ◽  
Author(s):  
Yao-Wei Wang ◽  
Yan-Fei Wang ◽  
Yong Xue ◽  
Wen Gao

Author(s):  
Lan Gao ◽  
Qingguo Song ◽  
Chuang Li ◽  
Qing Hua ◽  
Chuang Yang
