Appearance generation for colored spun yarn fabric based on conditional image‐to‐image translation

Author(s):  
Ning Zhang ◽  
Jun Xiang ◽  
Jingan Wang ◽  
Ruru Pan ◽  
Weidong Gao
Author(s):  
Neil Rowlands ◽  
Jeff Price ◽  
Michael Kersker ◽  
Seichi Suzuki ◽  
Steve Young ◽  
...  

Three-dimensional (3D) microstructure visualization on the electron microscope requires that the sample be tilted to different positions to collect a series of projections. This tilting should be performed rapidly for on-line stereo viewing and precisely for off-line tomographic reconstruction. Usually a projection series is collected using mechanical stage tilt alone. The stereo pairs must then be viewed off-line, and the 60 to 120 tomographic projections must be aligned with fiducial markers or digital correlation methods. The delay in viewing stereo pairs and the alignment problems in tomographic reconstruction could be eliminated or reduced by tilting the beam, provided such tilt could be accomplished without image translation. A microscope capable of beam tilt with simultaneous image shift to eliminate tilt-induced translation has been investigated for 3D imaging of thick (1 μm) biological specimens. By tilting the beam above and through the specimen and bringing it back below the specimen, a brightfield image with a projection angle corresponding to the beam-tilt angle can be recorded (Fig. 1a).
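
The digital-correlation alignment mentioned above can be illustrated with a minimal FFT phase-correlation sketch in Python; this is a generic example of aligning two projection images, not the specific procedure used in this work, and the function name is hypothetical.

    import numpy as np

    def correlation_shift(ref, moving):
        """Estimate the integer (row, col) shift between two projection images
        by phase correlation - a generic digital-correlation alignment step."""
        F1 = np.fft.fft2(ref)
        F2 = np.fft.fft2(moving)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12           # keep only phase information
        corr = np.fft.ifft2(cross).real           # correlation surface
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks in the upper half of the spectrum correspond to negative shifts
        shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
        return tuple(shift)

    # Usage: align each projection in a tilt series to the first one
    # dy, dx = correlation_shift(projections[0], projections[k])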


2021 ◽  
Vol 3 (6) ◽  
Author(s):  
Yixin Liu ◽  
Zhen Li ◽  
Yutong Feng ◽  
Juming Yao

Conductive yarn is an important component and connector in electronic and intelligent textiles, and the development of high-performance, low-cost conductive yarns has attracted increasing attention. Herein, a simple, scalable sizing process was introduced to prepare graphene-coated conductive cotton yarns. The electrical conduction mechanisms of the fibers and yarns were studied using percolation theory and binomial distribution theory, respectively. Conductive paths form when conductive fibers contact each other, and the results revealed that the connection probability of the fibers in the yarn (p) is proportional to the square of the fiber filling coefficient (φ), i.e., p ∝ φ². A formula for the resistance of the staple spun yarn can be derived from this conclusion and was verified by experiments, further demonstrating the feasibility of producing conductive cotton yarns by the sizing process.
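
The reported scaling p ∝ φ² can be turned into a small numerical sketch; the proportionality constant k and the parallel-fiber resistance approximation below are assumptions made for illustration only, not the resistance formula derived in the paper.

    def fiber_connection_probability(phi, k=1.0):
        """Connection probability of conductive fibers in the yarn, using the
        reported scaling p ∝ φ²; k is an assumed proportionality constant."""
        return min(1.0, k * phi ** 2)

    def yarn_resistance_estimate(rho_fiber, area_fiber, length, n_fibers, phi, k=1.0):
        """Rough yarn-resistance estimate: conductive fibers treated as parallel
        resistors, de-rated by the connection probability. Illustrative
        approximation only, not the formula derived in the paper."""
        p = fiber_connection_probability(phi, k)
        r_single = rho_fiber * length / area_fiber       # one fiber, ohms
        effective_parallel = max(n_fibers * p, 1e-9)      # expected connected fibers
        return r_single / effective_parallel

    # Example with nominal (assumed) values for a coated cotton yarn segment:
    # yarn_resistance_estimate(rho_fiber=1e-2, area_fiber=2e-10,
    #                          length=0.1, n_fibers=100, phi=0.5)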


2021 ◽  
Vol 16 ◽  
pp. 155892502110065
Author(s):  
Peng Cui ◽  
Yuan Xue ◽  
Yuexing Liu ◽  
Xianqiang Sun

Yarn-dyed textiles complement digitally printed textiles and hold promise for high productivity and environmentally friendly, energy-efficient production. However, the complicated structures of color-blended yarns lead to unpredictable colors in textile products and are a roadblock to developing pollution-free textile products. In the present work, we propose a framework for the intelligent manufacturing of color-blended yarns that combines a color prediction algorithm with a self-developed computer numerically controlled (CNC) ring-spinning system. The S-N model is used to predict the color-blending effect of the ring-spun yarn, and optimized blending ratios are obtained from the proposed linear model of the parameter W. The CNC ring-spinning frame is then used to manufacture color-blended yarns, configuring the constituent fibers so that different sections of the yarn exhibit different colors.
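
Assuming the S-N model here refers to the Stearns-Noechel blending model commonly used for pre-colored fiber blends, a minimal prediction sketch is given below; the parameter value M = 0.11 is only a placeholder, and the paper's linear model of the parameter W and its ratio optimization are not reproduced.

    import numpy as np

    def sn_function(R, M=0.11):
        """Stearns-Noechel transform of reflectance R (0-1); M is the empirical
        blending parameter (0.11 is used here purely as a placeholder)."""
        return (1.0 - R) / (M * (R - 0.01) + 0.01)

    def sn_inverse(F, M=0.11):
        """Invert the Stearns-Noechel transform back to reflectance."""
        return (1.0 + 0.01 * F * (M - 1.0)) / (F * M + 1.0)

    def predict_blend_reflectance(reflectances, ratios, M=0.11):
        """Predict the reflectance spectrum of a fiber blend: the transformed
        component reflectances combine additively, weighted by blending ratios."""
        reflectances = np.asarray(reflectances, dtype=float)  # (n_fibers, n_wavelengths)
        ratios = np.asarray(ratios, dtype=float)              # (n_fibers,)
        F_blend = ratios @ sn_function(reflectances, M)
        return sn_inverse(F_blend, M)

    # Example: 60/40 blend of two pre-dyed fibers measured at three wavelengths
    # predict_blend_reflectance([[0.20, 0.35, 0.60], [0.55, 0.50, 0.15]], [0.6, 0.4])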


Author(s):  
Masoumeh Zareapoor ◽  
Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with this problem and need to be dealt with: the lack of paired datasets, multimodality, and diversity. Despite their strong performance on many computer vision tasks, convolutional neural networks (CNNs) fail to capture the hierarchy of spatial relationships between the parts of an object and thus do not form the ideal representative model we are looking for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer that explicitly allows spatial manipulation of the data during training. This differentiable module can be inserted into the convolutional layers of the generative model and allows the generated distributions to be altered freely for image-to-image translation. To reap the benefits of the proposed module within the generative model, our architecture incorporates a new loss function that facilitates effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.
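
A minimal sketch of such a trainable, differentiable spatial-manipulation module (a spatial transformer block that can sit between convolutional layers of a generator) is shown below in PyTorch; it illustrates the general idea rather than the authors' exact architecture or loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformerBlock(nn.Module):
        """Differentiable spatial-manipulation module that can be inserted
        between convolutional layers of a generator (illustrative sketch)."""
        def __init__(self, channels):
            super().__init__()
            # Localization network: regresses a 2x3 affine matrix from the feature map
            self.loc = nn.Sequential(
                nn.Conv2d(channels, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
                nn.Linear(16 * 4 * 4, 6),
            )
            # Initialize to the identity transform so training starts unchanged
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)                   # per-sample affine params
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)   # warped feature map

    # Usage: insert between generator conv layers, e.g.
    # feats = SpatialTransformerBlock(64)(feats)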


Author(s):  
Zhi Qiao ◽  
Takashi Kanai

We introduce an unsupervised GAN-based model for shading photorealistic hair animations. Our model is much faster than previous rendering algorithms and produces fewer artifacts than other neural image translation methods. The main idea is to extend the Cycle-GAN structure to avoid a semitransparent hair appearance and to reproduce exactly the interaction of the lights with the scene. We use two constraints to ensure temporal coherence and highlight stability. Our approach outperforms previous methods and is computationally more efficient.
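
A hedged sketch of the kind of loss terms involved (cycle consistency plus a simple temporal-coherence penalty) is given below in PyTorch; the weights, the plain L1 temporal term, and the omission of the adversarial and highlight-stability terms are simplifications for illustration, not the paper's exact constraints.

    import torch
    import torch.nn.functional as F

    def cycle_and_temporal_losses(G, F_inv, real_x, prev_fake_y=None,
                                  lambda_cycle=10.0, lambda_temporal=5.0):
        """Illustrative losses for a Cycle-GAN-style shading model: cycle
        consistency plus a frame-to-frame coherence penalty. Adversarial and
        highlight-stability terms are omitted; weights are assumed values."""
        fake_y = G(real_x)                    # source frame -> shaded frame
        rec_x = F_inv(fake_y)                 # map back for cycle consistency
        loss_cycle = F.l1_loss(rec_x, real_x) * lambda_cycle

        loss_temporal = torch.tensor(0.0, device=real_x.device)
        if prev_fake_y is not None:
            # Penalize large changes between consecutive shaded frames; real
            # systems usually warp the previous frame by motion first.
            loss_temporal = F.l1_loss(fake_y, prev_fake_y) * lambda_temporal

        return fake_y, loss_cycle + loss_temporal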

