Texture Representation
Recently Published Documents


TOTAL DOCUMENTS: 110 (five years: 27)

H-INDEX: 15 (five years: 2)

Crystals ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1021
Author(s):  
Aditya Vuppala ◽  
Alexander Krämer ◽  
Johannes Lohmar

The distribution of orientation differences among crystallites, i.e., the texture of a metallic polycrystal, governs the plastic anisotropy and the electrical and magnetic properties of the material. Representative volume elements (RVEs), often generated from experimental measurements, are commonly used to simulate microstructure and texture evolution during forming processes. While the grain size and morphology of polycrystals are typically determined via light-optical microscopy, their texture is conventionally analyzed through diffraction experiments. Data from these different experiments must be correlated such that a representative set of sampled orientations is assigned to the grains in the RVE. Here, the concept of Texture Sampling through Orientation Optimization (TSOO) is introduced: based on the measured intensity, the required number of orientations is first assigned directly to the grains of the RVE, and the Bunge–Euler angles of all orientations are then optimized in turn with respect to the experimental measurements. Because orientations are assigned to grains of variable size during optimization, compatibility between inhomogeneity in the microstructure and the texture is inherently addressed. The method was tested on different microstructures of non-oriented electrical steels and showed good accuracy for both homogeneous and inhomogeneous grain size distributions.
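The first TSOO step, assigning orientations to grains in proportion to measured intensity, can be illustrated with a minimal sketch. The function name, the toy orientation bins, and the size-descending assignment order are all assumptions for illustration; the actual method subsequently optimizes the Bunge–Euler angles, which is not reproduced here.

```python
import random

def assign_orientations(intensities, grain_sizes):
    """Assign one orientation bin to each grain, with sampling probability
    proportional to measured ODF intensity (illustrative sketch only; TSOO
    additionally optimizes the Bunge-Euler angles afterwards)."""
    total = sum(intensities.values())
    bins = list(intensities)
    weights = [intensities[b] / total for b in bins]
    rng = random.Random(0)  # fixed seed for reproducibility
    # Visit larger grains first, so dominant texture components tend to
    # land on the grains that contribute most volume to the RVE.
    order = sorted(range(len(grain_sizes)), key=lambda i: -grain_sizes[i])
    assignment = [None] * len(grain_sizes)
    for i in order:
        assignment[i] = rng.choices(bins, weights=weights, k=1)[0]
    return assignment
```

With e.g. `assign_orientations({"cube": 3.0, "goss": 1.0}, [5.0, 2.0, 1.0])`, each grain receives a bin, and the "cube" component is drawn roughly three times as often as "goss".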


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Yuan Li ◽  
Muli Liu ◽  
JunPing Liu ◽  
Yali Yang ◽  
Xue Gong

Abstract The local binary pattern (LBP) and its variants have proven effective for texture image representation. However, most LBP methods focus only on the histogram of LBP patterns, ignoring the spatial contextual information among them. In this paper, a uniform three-structure descriptor method is proposed that uses three different encoding schemes to capture local spatial context for characterizing the nonuniform texture on the surface of colored spun fabrics. Testing on 180 samples with 18 different color schemes indicates that the established texture representation model can accurately express the nonuniform texture structure of colored spun fabrics. In addition, the overall correlation indices between texture features and sample parameters are 0.027 and 0.024, respectively. Compared with LBP and its variants, the proposed method achieves higher representational ability together with lower time complexity. The algorithm also proves effective and general for fabric image retrieval: the mean Average Precision (mAP) for the first group of samples is 86.2%; in the second group, the mAP for samples with a low twist coefficient is 89.6%, while that for samples with a high twist coefficient is 88.5%.
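The baseline that this work extends, the plain 8-neighbour LBP code, can be sketched as follows. This is only the classic operator; the paper's three-structure descriptor and its spatial-context encodings are not reproduced, and the function name is an illustrative assumption.

```python
def lbp_image(img):
    """Compute the basic 8-neighbour LBP code for each interior pixel.
    img is a 2-D list of grey levels; returns a 2-D list of codes 0..255.
    Each neighbour >= the centre contributes one bit to the code."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            out[y - 1][x - 1] = code
    return out
```

A histogram of these codes is the standard LBP texture feature; the paper's point is that such a histogram alone discards where the patterns occur relative to one another.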


2021 ◽  
Author(s):  
Suguru Wakita ◽  
Taiki Orima ◽  
Isamu Motoyoshi

Recent advances in brain decoding have made it possible to classify image categories from neural activity, and a growing number of studies have attempted to reconstruct the image itself. However, because images of objects and scenes inherently involve spatial layout information, such reconstruction usually requires retinotopically organized neural data with high spatial resolution, such as fMRI signals. In contrast, spatial layout does not matter for the perception of 'texture', which is known to be represented as spatially global image statistics in the visual cortex. This property of 'texture' makes it possible to reconstruct the perceived image from EEG signals, which have low spatial resolution. Here, we propose an MVAE-based approach for reconstructing texture images from visual evoked potentials measured while observers viewed natural textures, such as the textures of various surfaces and object ensembles. This approach allowed us to reconstruct images that perceptually resemble the original textures with a photographic appearance. A subsequent analysis of the dynamic development of the internal texture representation in the VGG network showed that the fidelity of the reconstructed texture improves rapidly at around 200 ms latency in the lower layers but more gradually in the higher layers. The present approach can serve as a method for decoding the highly detailed 'impression' of sensory stimuli from brain activity.
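The claim that spatial layout does not matter for texture rests on texture being encoded as spatially global image statistics. A minimal sketch of such statistics (mean, variance, skewness, kurtosis over all pixels, ignoring position) makes the layout-invariance concrete; the function name is an assumption, and real texture models use far richer statistics (e.g. over filter-bank subbands).

```python
def global_stats(pixels):
    """Spatially global statistics of a flat list of pixel values:
    (mean, variance, skewness, kurtosis). Pixel ORDER is irrelevant,
    illustrating why texture, unlike object/scene layout, can survive
    the low spatial resolution of EEG."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    sd = var ** 0.5 or 1.0  # avoid division by zero on constant input
    skew = sum(((p - mean) / sd) ** 3 for p in pixels) / n
    kurt = sum(((p - mean) / sd) ** 4 for p in pixels) / n
    return mean, var, skew, kurt
```

Shuffling the pixels leaves every one of these statistics unchanged, whereas any object or scene reconstruction would be destroyed by the same shuffle.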


Author(s):  
Amey Thakur ◽  
Hasan Rizvi ◽  
Mega Satish

In the present study, we propose a new framework for estimating generative models via an adversarial process, extending an existing GAN framework to develop white-box, controllable image cartoonization that can generate high-quality cartooned images and videos from real-world photos and videos. The learning objectives of our system are based on three distinct representations: the surface representation, the structure representation, and the texture representation. The surface representation captures the smooth surfaces of the image; the structure representation captures sparse colour blocks and compresses generic content; and the texture representation captures the textures, curves, and fine features of cartoon images. The Generative Adversarial Network (GAN) framework decomposes images into these representations and learns from them to generate cartoon images. This decomposition makes the framework more controllable and flexible, allowing users to adjust the output as required. The approach outperforms previous systems in preserving the clarity, colours, textures, and shapes of the input images while still exhibiting the characteristics of cartoon images.
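The structure representation's "sparse colour blocks" can be suggested with a toy stand-in: collapsing pixel values into a handful of flat levels. This coarse quantization is only an illustration of the idea and an assumption of this sketch; white-box cartoonization frameworks typically obtain such blocks from superpixel segmentation, not per-pixel quantization.

```python
def structure_blocks(img, levels=4):
    """Toy stand-in for the structure representation: quantize each pixel
    of a 2-D list of grey levels into at most `levels` flat values, so
    smooth gradients collapse into sparse, block-like regions."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    step = (hi - lo) / levels or 1.0  # guard against a constant image
    def q(v):
        b = min(int((v - lo) / step), levels - 1)  # clamp top bin
        return lo + (b + 0.5) * step               # bin centre
    return [[q(v) for v in row] for row in img]
```

After quantization the image contains at most `levels` distinct values, which is the flat, compressed-content look the structure branch is trained to reward.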


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Yuting Du ◽  
Tong Qiao ◽  
Ming Xu ◽  
Ning Zheng

Most existing face authentication systems are vulnerable to presentation attacks, which can compromise facial unlocking of smart devices, facial access control systems, and face-scan payment. Accordingly, as a security safeguard against such attacks, face presentation attack detection has been studied in this community. In this work, a face presentation attack detector is designed based on a residual color texture representation (RCTR). Existing methods lack effective data preprocessing, so we propose adopting a DW-filter to obtain a residual image, which effectively improves detection efficiency. Subsequently, a powerful CM texture descriptor is introduced, which outperforms widely used descriptors such as LBP and LPQ. Additionally, representative texture features are extracted not only from RGB space but also from more discriminative color spaces such as HSV, YCbCr, and CIE 1976 L∗a∗b (LAB). The RCTR is then fed into a well-designed classifier; specifically, we compare and analyze the performance of advanced classifiers, among which an ensemble classifier based on a probabilistic voting decision is the optimal choice. Extensive experimental results empirically verify the proposed detector's superior performance in both intradataset and interdataset (mismatched training-testing samples) evaluation.
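One of the color spaces the detector draws features from, YCbCr, separates luminance from chrominance, which is what makes it more discriminative than raw RGB for spotting the color distortions of printed or replayed faces. A minimal per-pixel conversion using the standard ITU-R BT.601 full-range coefficients is sketched below; the residual DW-filtering and the CM descriptor from the paper are not reproduced, and the function name is an assumption.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion for one pixel.
    Y carries luminance; Cb and Cr carry chrominance centred at 128,
    so colour-reproduction artifacts of spoof media show up in Cb/Cr."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

A texture descriptor applied separately to the Y, Cb, and Cr planes (and likewise to HSV and LAB planes) yields the kind of multi-color-space feature stack the RCTR builds on.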

