Image-based textile decoding

2020 ◽  
pp. 1-14
Author(s):  
Siqiang Chen ◽  
Masahiro Toyoura ◽  
Takamasa Terada ◽  
Xiaoyang Mao ◽  
Gang Xu

A textile fabric consists of countless parallel vertical yarns (warps) and horizontal yarns (wefts). While common looms can only weave repetitive patterns, Jacquard looms can weave patterns without repetition restrictions. A pattern in which the warps and wefts cross on a grid is defined by a binary matrix, whose entries specify whether the warp or the weft is on top at each grid point of the Jacquard fabric. The process can be regarded as encoding from pattern to textile. In this work, we propose a decoding method that generates a binary pattern from a textile fabric that has already been woven. A deep neural network cannot learn this process from a training set of patterns and observed fabric images alone: the crossing points in the observed images do not lie exactly on the grid points, so a direct correspondence between the fabric images and the matrix-represented pattern is difficult to establish within a deep-learning framework. Therefore, we propose a method that applies the deep-learning framework via an intermediate representation of patterns and images. We show how to convert a pattern into the intermediate representation and how to reconvert the network output into a pattern, and we confirm the effectiveness of this approach. In our experiments, decoding patterns from actual fabric images and weaving them again reproduced 93% of the correct pattern.
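As a rough illustration of the binary-matrix encoding described above (a sketch, not the authors' implementation; all names, sizes, and the rendering scheme are hypothetical), a weave pattern can be stored as a 0/1 matrix and rendered into a grid-aligned image that serves as a crude intermediate representation:

```python
import numpy as np

# Hypothetical sketch: a 2/2 twill repeat as a binary matrix.
# pattern[i, j] == 1 means the warp is on top at crossing (i, j),
# 0 means the weft is on top.
twill_repeat = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
], dtype=np.uint8)

def tile_pattern(repeat, n_wefts, n_warps):
    """Tile a repeat unit to the full pattern size (the encoding step)."""
    reps = (-(-n_wefts // repeat.shape[0]), -(-n_warps // repeat.shape[1]))
    return np.tile(repeat, reps)[:n_wefts, :n_warps]

def pattern_to_image(pattern, cell=8):
    """Render each crossing as a cell x cell patch: a crude intermediate
    representation that keeps pattern entries aligned with image regions."""
    return np.kron(pattern * 255, np.ones((cell, cell), dtype=np.uint8))

full = tile_pattern(twill_repeat, 64, 64)
img = pattern_to_image(full)   # 512 x 512 grayscale rendering of the pattern
print(full.shape, img.shape)
```

Decoding, as proposed in the paper, goes the other way: from a photograph of the woven fabric, where crossings are not exactly grid-aligned, back to such a binary matrix.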

Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. V333-V350 ◽  
Author(s):  
Siwei Yu ◽  
Jianwei Ma ◽  
Wenlong Wang

In contrast to traditional seismic noise attenuation algorithms that depend on signal models and their corresponding prior assumptions, a deep neural network removes noise after being trained on a large training set in which the inputs are the raw data and the corresponding outputs are the desired clean data. After the completion of training, the deep-learning (DL) method achieves adaptive denoising with no requirement for (1) accurate modeling of the signal and noise or (2) optimal parameter tuning. We call this intelligent denoising. We use a convolutional neural network (CNN) as the basic tool for DL. For random and linear noise attenuation, the training set is generated with artificially added noise; for multiple attenuation, the training set is generated with the acoustic wave equation. Stochastic gradient descent is used to solve for the optimal parameters of the CNN. The runtime of DL on a graphics processing unit for denoising is of the same order as that of the f-x deconvolution method. Synthetic and field results indicate the potential applications of DL in the automatic attenuation of random noise (with unknown variance), linear noise, and multiples.
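A minimal sketch of the supervised denoising setup described above (architecture, sizes, and training data are illustrative placeholders, not the network from the paper):

```python
import torch
import torch.nn as nn

# Minimal sketch of supervised denoising: noisy gathers in, clean gathers out.
# Depth, channel width, and the synthetic data are illustrative only.
class DenoiseCNN(nn.Module):
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = DenoiseCNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.MSELoss()

# Synthetic stand-in for (noisy, clean) training pairs with artificially added noise.
clean = torch.randn(8, 1, 64, 64)
noisy = clean + 0.3 * torch.randn_like(clean)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)   # learn the mapping noisy -> clean
    loss.backward()
    opt.step()                            # stochastic gradient descent update
```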


Author(s):  
Rui Guo ◽  
Xiaobin Hu ◽  
Haoming Song ◽  
Pengpeng Xu ◽  
Haoping Xu ◽  
...  

Abstract Purpose: To develop a weakly supervised deep learning (WSDL) method that can utilize incomplete/missing survival data to predict the prognosis of extranodal natural killer/T cell lymphoma, nasal type (ENKTL) based on pretreatment 18F-FDG PET/CT results. Methods: One hundred and sixty-seven patients with ENKTL who underwent pretreatment 18F-FDG PET/CT were retrospectively collected. Eighty-four patients were followed up for at least 2 years (training set = 64, test set = 20). A WSDL method was developed to enable the integration of the remaining 83 patients with incomplete/missing follow-up information into the training set. To test generalization, these data were derived from three types of scanners. A prediction similarity index (PSI) was derived from the deep learning features of the images. Its discriminative ability was calculated and compared with that of a conventional deep learning (CDL) method. Univariate and multivariate analyses were used to explore the significance of PSI and clinical features. Results: PSI achieved area under the curve scores of 0.9858 and 0.9946 (training set) and 0.8750 and 0.7344 (test set) in the prediction of progression-free survival (PFS) with the WSDL and CDL methods, respectively. A PSI threshold of 1.0 could significantly differentiate the prognosis. In the test set, WSDL and CDL achieved prediction sensitivity, specificity, and accuracy of 87.50% and 62.50%, 83.33% and 83.33%, and 85.00% and 75.00%, respectively. Multivariate analysis confirmed PSI to be an independent significant predictor of PFS with both methods. Conclusion: The WSDL-based framework was more effective for extracting 18F-FDG PET/CT features and predicting the prognosis of ENKTL than the CDL method.
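As a small illustration of how threshold-based test-set metrics of this kind can be computed (the PSI values and outcome labels below are synthetic placeholders, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical PSI scores and progression labels (1 = progression); illustrative only.
psi = np.array([0.4, 0.8, 1.3, 0.9, 1.6, 0.7, 1.2, 1.8])
progressed = np.array([0, 0, 1, 0, 1, 0, 1, 1])

auc = roc_auc_score(progressed, psi)          # discriminative ability of PSI

# Dichotomize at the PSI threshold of 1.0 mentioned in the abstract.
pred = (psi >= 1.0).astype(int)
tn, fp, fn, tp = confusion_matrix(progressed, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(progressed)
print(auc, sensitivity, specificity, accuracy)
```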


2018 ◽  
Vol 12 (3) ◽  
pp. 143-157 ◽  
Author(s):  
Håvard Raddum ◽  
Pavol Zajac

Abstract We show how to build a binary matrix from the MRHS representation of a symmetric-key cipher. The matrix contains the cipher represented as an equation system and can be used to assess a cipher’s resistance against algebraic attacks. We give an algorithm for solving the system and compute its complexity. The complexity is normally close to exhaustive search on the variables representing the user-selected key. Finally, we show that for some variants of LowMC, the joined MRHS matrix representation can be used to speed up regular encryption in addition to exhaustive key search.


2016 ◽  
Vol 7 (4) ◽  
pp. 810-822 ◽  
Author(s):  
P. Sonali ◽  
D. Nagesh Kumar

Worldwide, major changes in the climate are expected due to global warming, which leads to temperature variations. To assess the impact of climate change on the hydrological cycle, a spatio-temporal change detection study of potential evapotranspiration (PET), along with maximum and minimum temperatures (Tmax and Tmin), over India has been performed for the second half of the 20th century (1950–2005) at both monthly and seasonal scales. The observed monthly climatology of PET over India shows high values of PET during the months of March, April, May and June. Temperature is one of the significant factors in explaining changes in PET. Hence, seasonal correlations of PET with Tmax and Tmin were analyzed using Spearman rank correlation. The correlation of PET with Tmax was found to be higher than that with Tmin. The seasonal variability of the trend at each grid point over India was studied for Tmax, Tmin and PET separately. Trend-Free Pre-Whitening and Modified Mann-Kendall approaches, which account for the effect of serial correlation, were employed for the trend detection analysis. Significant trends were observed more often in Tmin than in Tmax and PET. Significant upward trends in Tmax, Tmin and PET were observed over most of the grid points in the interior peninsular region.
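A hedged sketch of the two statistical tools named above, Spearman rank correlation and a Mann-Kendall-type trend test (the series are synthetic placeholders; only the plain Mann-Kendall statistic is shown, whereas the Trend-Free Pre-Whitening and Modified Mann-Kendall variants add serial-correlation corrections not implemented here):

```python
import numpy as np
from scipy.stats import spearmanr, norm

# Illustrative sketch (not the study's code): annual PET and Tmax values
# at one grid point; the numbers are synthetic placeholders.
rng = np.random.default_rng(0)
years = np.arange(1950, 2006)
tmax = 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)
pet = 0.5 * tmax + rng.normal(0, 0.2, years.size)

# Spearman rank correlation between PET and Tmax.
rho, p_rho = spearmanr(pet, tmax)

def mann_kendall_z(x):
    """Plain Mann-Kendall Z statistic (no ties, no serial-correlation correction)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

z_pet = mann_kendall_z(pet)
p_trend = 2 * (1 - norm.cdf(abs(z_pet)))   # two-sided p-value for the trend
print(rho, p_rho, z_pet, p_trend)
```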


10.37236/734 ◽  
2008 ◽  
Vol 15 (1) ◽  
Author(s):  
Uwe Schauz

The main result of this paper is a coefficient formula that sharpens and generalizes Alon and Tarsi's Combinatorial Nullstellensatz. On its own, it is a result about polynomials, providing some information about the polynomial map $P|_{\mathfrak{X}_1\times\cdots\times\mathfrak{X}_n}$ when only incomplete information about the polynomial $P(X_1,\dots,X_n)$ is given. In a very general working frame, the grid points $x\in \mathfrak{X}_1\times\cdots\times\mathfrak{X}_n$ which do not vanish under an algebraic solution – a certain describing polynomial $P(X_1,\dots,X_n)$ – correspond to the explicit solutions of a problem. As a consequence of the coefficient formula, we prove that the existence of an algebraic solution is equivalent to the existence of a nontrivial solution to a problem. By a problem, we mean everything that "owns" both a set ${\cal S}$, which may be called the set of solutions, and a subset ${\cal S}_{\rm triv}\subseteq{\cal S}$, the set of trivial solutions. We give several examples of how to find algebraic solutions and how to apply our coefficient formula. These examples are mainly from graph theory and combinatorial number theory, but we also prove several versions of Chevalley and Warning's Theorem, including a generalization of Olson's Theorem, as examples and useful corollaries. We obtain a permanent formula by applying our coefficient formula to the matrix polynomial, which is a generalization of the graph polynomial. This formula is an integrative generalization and sharpening of: 1. Ryser's permanent formula; 2. Alon's Permanent Lemma; 3. Alon and Tarsi's Theorem about orientations and colorings of graphs. Furthermore, in combination with the Vigneron-Ellingham-Goddyn property of planar $n$-regular graphs, the formula contains as very special cases: 4. Scheim's formula for the number of edge $n$-colorings of such graphs; 5. Ellingham and Goddyn's partial answer to the list coloring conjecture.
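For background (not part of the abstract itself), the classical coefficient form of Alon's Combinatorial Nullstellensatz, which the coefficient formula above sharpens and generalizes, can be stated as follows:

```latex
% Background only: classical coefficient form of the Combinatorial
% Nullstellensatz (Alon), stated in the notation of the abstract above.
Let $P(X_1,\dots,X_n)$ be a polynomial over a field $\mathbb{F}$ with
$\deg P = t_1 + \cdots + t_n$, and suppose the coefficient of the monomial
$X_1^{t_1}\cdots X_n^{t_n}$ in $P$ is nonzero. If the sets
$\mathfrak{X}_1,\dots,\mathfrak{X}_n \subseteq \mathbb{F}$ satisfy
$|\mathfrak{X}_i| > t_i$ for every $i$, then there exists a grid point
$x \in \mathfrak{X}_1 \times \cdots \times \mathfrak{X}_n$ such that
\[
  P(x) \neq 0 .
\]
```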


Author(s):  
Elena Morotti ◽  
Davide Evangelista ◽  
Elena Loli Piccolomini

Deep learning is producing tools of great interest for inverse imaging applications. In this work, we consider a medical imaging reconstruction task from subsampled measurements, an active research field where convolutional neural networks have already revealed their great potential. However, the commonly used architectures are very deep, hence prone to overfitting, and infeasible for clinical use. Inspired by ideas from the green-AI literature, we propose a shallow neural network that performs efficient learned post-processing on images roughly reconstructed by the filtered backprojection algorithm. The results obtained on images from the training set and on unseen images, using both the inexpensive network and the widely used, very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one fourth of the time.
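A minimal sketch of the learned post-processing idea (not the network from the paper; the layer counts, channel widths, and residual formulation are assumptions made here for illustration):

```python
import torch
import torch.nn as nn

# Sketch of a shallow learned post-processing network: it refines a coarse
# filtered-backprojection (FBP) reconstruction. Sizes are illustrative only.
class ShallowPostProcess(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, fbp_image):
        # Residual correction: the small network only learns the artifact pattern.
        return fbp_image + self.body(fbp_image)

# fbp_recon would come from a standard filtered backprojection routine applied
# to subsampled measurements; here it is a random placeholder tensor.
fbp_recon = torch.rand(4, 1, 128, 128)
refined = ShallowPostProcess()(fbp_recon)
print(refined.shape)
```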


2007 ◽  
Vol 19 (1) ◽  
pp. 47-79 ◽  
Author(s):  
Abigail Morrison ◽  
Sirko Straube ◽  
Hans Ekkehard Plesser ◽  
Markus Diesmann

Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
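An illustrative sketch of exact subthreshold integration from one grid point to the next (not the paper's implementation; the current-based leaky integrate-and-fire model, parameter values, and names are assumptions made for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Current-based LIF with linear subthreshold dynamics:
#   tau_s dI/dt = -I,   tau_m dV/dt = -V + R * I
# so the state y = (I, V) evolves as dy/dt = A y, and one grid step of size h
# is advanced exactly by the propagator P = expm(A * h).
tau_m, tau_s, R, h = 10.0, 2.0, 1.0, 0.1     # ms, ms, resistance, grid step (ms)

A = np.array([[-1.0 / tau_s, 0.0],
              [R / tau_m,   -1.0 / tau_m]])
P = expm(A * h)                               # exact per-step propagator

y = np.array([0.0, 0.0])                      # (I_syn, V) at the current grid point
spike_weight, threshold = 1.0, 15.0

for step in range(1000):
    y = P @ y                                 # exact subthreshold evolution over h
    if step == 100:
        y[0] += spike_weight                  # incoming spike delivered on the grid
    if y[1] >= threshold:
        y[:] = 0.0                            # fire and reset (grid-constrained spike)
```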


2021 ◽  
Author(s):  
Ryan Santoso ◽  
Xupeng He ◽  
Marwa Alsinan ◽  
Hyung Kwak ◽  
Hussein Hoteit

Abstract Automatic fracture recognition from borehole images or outcrops is applicable to the construction of fractured reservoir models. Deep learning for fracture recognition is subject to uncertainty due to sparse and imbalanced training sets and random initialization. We present a new workflow to optimize a deep learning model under uncertainty using U-Net. We consider both the epistemic and aleatoric uncertainty of the model. We propose a U-Net architecture in which a dropout layer is inserted after every "weighting" layer. We vary the dropout probability to investigate its impact on the uncertainty response. We build the training set and assign a uniform distribution to each training parameter, such as the number of epochs, batch size, and learning rate. We then perform uncertainty quantification by running the model multiple times for each realization, where we capture the aleatoric response. In this approach, which is based on Monte Carlo Dropout, the variance map and F1-scores are utilized to evaluate the need to craft additional augmentations or stop the process. This work demonstrates the existence of uncertainty within deep learning caused by sparse and imbalanced training sets, which leads to unstable predictions. The overall responses are accommodated in the form of aleatoric uncertainty. Our workflow utilizes the uncertainty response (variance map) as a measure to craft additional augmentations in the training set. High variance in certain features denotes the need to add new augmented images containing those features, either through affine transformation (rotation, translation, and scaling) or by utilizing similar images. The augmentation improves the accuracy of the prediction, reduces the prediction variance, and stabilizes the output. The architecture, number of epochs, batch size, and learning rate are optimized under a fixed-uncertain training set. We perform the optimization by searching for the global maximum of accuracy after running multiple realizations. Besides the quality of the training set, the learning rate is the dominant factor in the optimization process. The selected learning rate controls the diffusion of information in the model. Under imbalanced conditions, high learning rates cause the model to miss the main features. The other challenge in fracture recognition on a real outcrop is to optimally pick the parental images used to generate the initial training set. We suggest picking images from multiple sides of the outcrop that show significant variations of the features. This technique is needed to avoid long iterations within the workflow. We introduce a new approach to address the uncertainties associated with the training process and with the physical problem. The proposed approach is general in concept and can be applied to various deep-learning problems in geoscience.
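A hedged Monte Carlo Dropout sketch of the variance-map idea described above (a toy network, not the authors' U-Net; dropout placement, sizes, and sample counts are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Tiny segmentation network with dropout after each conv "weighting" layer;
# repeated stochastic forward passes yield per-pixel mean and variance maps.
class TinyDropoutSeg(nn.Module):
    def __init__(self, channels=16, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True), nn.Dropout2d(p_drop),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True), nn.Dropout2d(p_drop),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, image, n_samples=30):
    model.train()                    # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)   # mean map, variance map

borehole_patch = torch.rand(1, 1, 128, 128)          # placeholder input image
mean_map, variance_map = mc_dropout_predict(TinyDropoutSeg(), borehole_patch)
# High-variance regions flag features that may need additional augmented images.
```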


2021 ◽  
Author(s):  
Jannes Münchmeyer ◽  
Dino Bindi ◽  
Ulf Leser ◽  
Frederik Tilmann

The estimation of earthquake source parameters, in particular magnitude and location, in real time is one of the key tasks for earthquake early warning and rapid response. In recent years, several publications have introduced deep learning approaches for these fast assessment tasks. Deep learning is well suited for these tasks, as it can work directly on waveforms and can learn features and their relations from data.

A drawback of deep learning models is their lack of interpretability, i.e., it is usually unknown what reasoning the network uses. Due to this issue, it is also hard to estimate how the model will handle new data whose properties differ in some aspects from the training set, for example earthquakes in previously seismically quiet regions. The discussions of previous studies usually focused on the average performance of models and did not consider this point in any detail.

Here we analyze a deep learning model for real-time magnitude and location estimation through targeted experiments and a qualitative error analysis. We conduct our analysis on three large-scale regional data sets from regions with diverse seismotectonic settings and network properties: Italy and Japan with dense networks (station spacing down to 10 km) of strong motion sensors, and North Chile with a sparser network (station spacing around 40 km) of broadband stations.

We obtained several key insights. First, the deep learning model does not seem to follow the classical approaches for magnitude and location estimation. For magnitude, one would classically expect the model to estimate attenuation, but the network rather seems to focus its attention on the spectral composition of the waveforms. For location, one would expect a triangulation approach, but our experiments instead show indications of a fingerprinting approach. Second, we can pinpoint the effect of training data size on model performance. For example, a four times larger training set reduces average errors for both magnitude and location prediction by more than half, and reduces the required time for real-time assessment by a factor of four. Third, the model fails for events with few similar training examples. For magnitude, this means that the largest events are systematically underestimated. For location, events in regions with few events in the training set tend to get mislocated to regions with more training events. These characteristics can have severe consequences in downstream tasks like early warning and need to be taken into account for future model development and evaluation.


Author(s):  
Priyanka Nandal

This work presents a simple method for motion transfer: given a source video of a subject (person) performing some movement, that motion is transferred to an amateur target performing a different motion. The pose is used as an intermediate representation to perform this translation. To transfer the motion of the source subject to the target subject, the pose is extracted from the source subject, and the target subject is then generated by applying the learned pose-to-appearance mapping. To perform this translation, the video is treated as a set of images consisting of all of its frames. Generative adversarial networks (GANs) are used to transfer the motion from the source subject to the target subject. GANs are an evolving field of deep learning.
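A hedged sketch of the pose-to-appearance mapping idea (not the paper's architecture; the pose-heatmap input format, layer choices, and names are assumptions made for illustration):

```python
import torch
import torch.nn as nn

# A generator maps a pose representation (e.g., keypoint heatmaps extracted
# from the source video) to a target-subject frame; a discriminator judges
# (pose, frame) pairs, as in a conditional GAN. Sizes are illustrative only.
class PoseToAppearanceG(nn.Module):
    def __init__(self, pose_channels=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_channels, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, pose_maps):
        return self.net(pose_maps)

class PairDiscriminator(nn.Module):
    def __init__(self, pose_channels=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_channels + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, pose_maps, frame):
        return self.net(torch.cat([pose_maps, frame], dim=1))

pose = torch.rand(1, 18, 256, 256)        # pose heatmaps from one source frame
fake_frame = PoseToAppearanceG()(pose)    # synthesized target-subject frame
score = PairDiscriminator()(pose, fake_frame)
```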

