Symmetric mean and directional contour pattern for texture classification

2021
Author(s): Yongsheng Dong, Boshi Zheng, Hong Liu, Zhiyong Zhang, Zhumu Fu

Author(s): Philip D. Hren

The pattern of bend contours that appears in the TEM image of a bent or curled sample indicates the shape into which the specimen is bent. Several authors have characterized the shape of their bent foils by this method, most recently I. Bolotov, and, in the early 1950s, G. Möllenstedt and O. Rang. However, the samples they considered were viewed at orientations away from a zone axis, or at zone axes of low symmetry, so dynamical interactions between the bend contours did not occur and their calculations could be based on purely geometric arguments. In this paper, bend contours are used to measure deflections of a single-crystal silicon membrane at the (111) zone axis, where dynamical effects are strong. Features in the bend-contour pattern are identified and associated with a particular bending angle of the membrane by reference to large-angle convergent-beam electron diffraction (LACBED) patterns.
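To illustrate what a "purely geometric argument" looks like in this context (and not the dynamical treatment this paper develops), a foil bent to a uniform radius of curvature R satisfies the +g and -g Bragg conditions at points separated laterally by roughly 2·θ_B·R, so a measured contour spacing gives the local curvature. The sketch below uses illustrative values for a Si (220) reflection at 200 kV; the spacing and all numbers are assumptions for demonstration, not results from the paper.

```python
# Purely geometric bend-contour relation (illustrative sketch, not the paper's method):
# a foil bent to radius of curvature R meets the +g and -g Bragg conditions
# at points separated laterally by roughly 2 * theta_B * R.
wavelength_A = 0.0251      # electron wavelength at 200 kV, in angstroms (assumed voltage)
d_220_A = 1.92             # Si (220) lattice spacing, in angstroms
theta_B = wavelength_A / (2.0 * d_220_A)      # small-angle Bragg angle, in radians

contour_spacing_um = 1.5   # hypothetical measured +g / -g contour separation
R_um = contour_spacing_um / (2.0 * theta_B)   # inferred radius of curvature
print(f"theta_B = {theta_B * 1e3:.2f} mrad, inferred R = {R_um:.0f} um")
```

Near a high-symmetry zone axis such as (111), strong dynamical coupling between reflections invalidates this simple proportionality, which is why the paper instead calibrates bending angles against LACBED patterns.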


2020, Vol. 2020 (10), pp. 310-1–310-7
Author(s): Khalid Omer, Luca Caucci, Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. The detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, the AUC, where AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes the AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that degrade both sCNN and mCNN performance even in the limit of large quantities of training data; a subsequent invertible linear transform changes the images' correlation structure again and can improve the AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications; by definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase the AUC from 0.56 to 0.93. The results indicate an optimal compression ratio for a CNN that depends on task difficulty, compression method, and the number of training images.
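For zero-mean Gaussian textures with unequal covariances, the Ideal Observer reduces to a quadratic test statistic, and the AUC can be estimated empirically from the two score distributions. The sketch below is a minimal illustration under assumed toy covariances (small stationary Toeplitz matrices chosen for demonstration, not the covariances used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_cov(n, length_scale):
    """Toeplitz (stationary) covariance with Gaussian-shaped correlations."""
    idx = np.arange(n)
    return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2) \
        + 1e-6 * np.eye(n)

n = 64                                     # flattened "image" dimension (illustrative)
K0 = stationary_cov(n, length_scale=2.0)   # class-0 covariance (assumed)
K1 = stationary_cov(n, length_scale=3.0)   # class-1 covariance (assumed)

# Ideal Observer for zero-mean Gaussians: the log-likelihood ratio is quadratic,
# lambda(x) = 0.5 * x^T (K0^{-1} - K1^{-1}) x + const.
# The constant (log-determinant) term is dropped because it does not change the
# ranking of scores and therefore does not change the AUC.
Q = np.linalg.inv(K0) - np.linalg.inv(K1)

def io_score(x):
    return 0.5 * np.einsum('...i,ij,...j->...', x, Q, x)

m = 2000
x0 = rng.multivariate_normal(np.zeros(n), K0, size=m)   # class-0 samples
x1 = rng.multivariate_normal(np.zeros(n), K1, size=m)   # class-1 samples
s0, s1 = io_score(x0), io_score(x1)

# Empirical AUC: probability that a class-1 score exceeds a class-0 score
# (Wilcoxon-Mann-Whitney estimate).
auc = np.mean(s1[:, None] > s0[None, :])
print(f"estimated IO AUC: {auc:.3f}")
```

Because the IO score is a deterministic function of the image, applying any invertible linear transform to the samples and the covariances together leaves this AUC unchanged, which is the invariance property the abstract contrasts with CNN behavior.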


2014, Vol. 1 (3), pp. 23-31
Author(s): Basava Raju, K. Y. Rama Devi, P. V. Kumar, ...

2016
Author(s): Zilong Zou, Jie Yang, Vasileios Megalooikonomou, Rachid Jennane, Erkang Cheng, ...

Sensors, 2021, Vol. 21 (3), p. 1010
Author(s): Claudio Cusano, Paolo Napoletano, Raimondo Schettini

In this paper we present T1K+, a very large, heterogeneous database of high-quality texture images acquired under variable conditions. T1K+ contains 1129 classes of textures ranging from natural subjects to food, textile samples, construction materials, and more. T1K+ allows the design of experiments aimed specifically at understanding the issues peculiar to texture classification and retrieval. To help exploration of the database, all 1129 classes are hierarchically organized into 5 thematic categories and 266 sub-categories. To complete our study, we present an evaluation of hand-crafted and learned visual descriptors on supervised texture classification tasks.
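The abstract does not specify the evaluation pipeline, but a minimal, generic sketch of supervised texture classification with a hand-crafted descriptor is shown below. It uses uniform local binary pattern (LBP) histograms and a linear SVM as stand-ins for the descriptors and classifiers evaluated on T1K+, and runs on synthetic textures so it is self-contained; the class definitions and all parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_texture(sigma, size=64):
    """Stand-in texture: white noise smoothed at a class-specific scale."""
    return gaussian_filter(rng.standard_normal((size, size)), sigma)

def lbp_histogram(img, P=8, R=1.0):
    """Uniform LBP histogram, a common hand-crafted texture descriptor."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Three hypothetical texture classes, distinguished only by correlation length.
sigmas = [1.0, 2.0, 4.0]
X, y = [], []
for label, sigma in enumerate(sigmas):
    for _ in range(100):
        X.append(lbp_histogram(synthetic_texture(sigma)))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = LinearSVC().fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

On a real collection such as T1K+, the same loop would replace the synthetic generator with an image loader over the 1129 class folders, and the descriptor function could be swapped for features extracted by a pretrained CNN to compare hand-crafted and learned representations.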

