Human Sensitivity to Perturbations Constrained by a Model of the Natural Image Manifold

2018 ◽  
Author(s):  
Ingo Fruend ◽  
Elee Stalker

Humans are remarkably well tuned to the statistical properties of natural images. However, quantitative characterization of processing within the domain of natural images has been difficult because most parametric manipulations of a natural image make that image appear less natural. We used generative adversarial networks (GANs) to constrain parametric manipulations to remain within an approximation of the manifold of natural images. In the first experiment, 7 observers decided which one of two synthetic perturbed images matched a synthetic unperturbed comparison image. Observers were significantly more sensitive to perturbations that were constrained to an approximate manifold of natural images than to perturbations applied directly in pixel space. Trial-by-trial errors were consistent with the idea that these perturbations disrupt configural aspects of visual structure used in image segmentation. In a second experiment, 5 observers discriminated paths along the image manifold as recovered by the GAN. Observers were remarkably good at this task, confirming that they were tuned to fairly detailed properties of an approximate manifold of natural images. We conclude that human tuning to natural images is more general than detecting deviations from natural appearance, and that humans have, to some extent, access to detailed interrelations between natural images.
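As an illustration of the contrast the abstract draws, the sketch below perturbs either the latent code of a pretrained GAN generator (staying near the approximate image manifold) or the decoded pixels directly; `G`, `z`, and `eps` are assumed names for the example, not the authors' code.

```python
import torch

def on_manifold_perturbation(G, z, eps=0.1):
    """Perturb in the GAN latent space, then decode: the result stays
    (approximately) on the generator's image manifold."""
    return G(z + eps * torch.randn_like(z))

def pixel_space_perturbation(G, z, eps=0.1):
    """Decode first, then perturb the pixels directly: the result generally
    leaves the approximate natural-image manifold."""
    x = G(z)
    return x + eps * torch.randn_like(x)
```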

2021 ◽  
Vol 13 (9) ◽  
pp. 1713
Author(s):  
Songwei Gu ◽  
Rui Zhang ◽  
Hongxia Luo ◽  
Mengyao Li ◽  
Huamei Feng ◽  
...  

Deep learning is an important research method in the remote sensing field. However, real-world samples of remote sensing images are relatively scarce, and labeled samples are even scarcer. Many neural networks, represented by Generative Adversarial Networks (GANs), can learn from real samples to generate pseudosamples, whereas traditional methods often require considerable time and manpower to obtain samples. However, the generated pseudosamples often lack realism and cannot be reliably used as the basis for various analyses and applications in remote sensing. To address these problems, a pseudolabeled sample generation method is proposed in this work and applied to scene classification of remote sensing images. An improved unconditional generative model that can be learned from a single natural image (improved SinGAN), equipped with an attention mechanism, can effectively generate enough pseudolabeled samples from a single remote sensing scene image. Pseudosamples generated by the improved SinGAN have stronger realism, require relatively little training time, and yield features that are easily recognized by the classification network. Compared with the original network, the improved SinGAN can better identify subjects in images with complex ground scenes, which reduces geographic errors in the generated pseudosamples. This study incorporated the generated pseudosamples into the training data for the classification experiment. The results showed that the SinGAN model integrated with the attention mechanism better guarantees feature extraction from the training data; the quality of the generated samples is thus improved, and the classification accuracy and stability of the classification network are also enhanced.
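A minimal sketch, under assumptions, of the architectural idea described above: a channel-attention block inserted into a single scale of a SinGAN-style generator. The class names and layer configuration are illustrative, not the improved SinGAN's exact implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention, used here to
    illustrate adding attention to a SinGAN-style generator."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights

class SingleScaleGenerator(nn.Module):
    """One scale of a SinGAN-like generator with attention inserted
    between the convolutional blocks (illustrative configuration only)."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.2, inplace=True),
            ChannelAttention(channels),
            nn.Conv2d(channels, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, noise, prev):
        # Residual formulation typical of SinGAN: refine the upsampled
        # output of the previous (coarser) scale.
        return prev + self.body(noise + prev)
```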


2020 ◽  
Vol 10 (15) ◽  
pp. 5032
Author(s):  
Xiaochang Wu ◽  
Xiaolin Tian

Medical image segmentation is a classic challenging problem. Segmenting the parts of interest in cardiac medical images is a basic task for cardiac image diagnosis and guided surgery, and the effectiveness of cardiac segmentation directly affects subsequent medical applications. Generative adversarial networks have achieved outstanding success in image segmentation compared with classic neural networks by solving the oversegmentation problem. Cardiac X-ray images are prone to weak edges, artifacts, and similar defects. This paper proposes an adaptive generative adversarial network for cardiac segmentation that improves the segmentation of X-ray images. The adaptive generative adversarial network consists of three parts: a feature extractor, a discriminator, and a selector. In this method, multiple generators are trained in the feature extractor, the discriminator scores features of different dimensions, and the selector chooses the appropriate features and adjusts the network for the next iteration. With the help of the discriminator, the method uses multinetwork joint feature extraction to achieve network adaptivity, allowing features of multiple dimensions to be combined in joint training and thereby enhancing the network's generalization ability. The results of cardiac segmentation experiments on X-ray chest radiographs show that this method achieves higher segmentation accuracy and less overfitting than other methods, and the proposed network is more stable.
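The selector idea can be sketched as follows, assuming a set of feature extractors whose outputs a discriminator scores; the wiring and names are assumptions for illustration rather than the paper's code.

```python
import torch
import torch.nn as nn

class FeatureSelector(nn.Module):
    """Illustrative selector: scores the feature maps produced by several
    extractors with a discriminator and keeps only the highest-scoring ones
    for the next training iteration."""
    def __init__(self, extractors, discriminator, keep=2):
        super().__init__()
        self.extractors = nn.ModuleList(extractors)
        self.discriminator = discriminator
        self.keep = keep

    def forward(self, x):
        # Extract features of different dimensions from each generator.
        feats = [extractor(x) for extractor in self.extractors]
        # Score each feature map with the shared discriminator.
        scores = torch.stack([self.discriminator(f).mean() for f in feats])
        # Keep the best-scoring features and combine them for joint training
        # (assumes the selected feature maps share spatial dimensions).
        top = torch.topk(scores, self.keep).indices
        return torch.cat([feats[i] for i in top], dim=1)
```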


Author(s):  
Tianyu Guo ◽  
Chang Xu ◽  
Boxin Shi ◽  
Chao Xu ◽  
Dacheng Tao

Generative Adversarial Networks (GANs) have demonstrated a strong ability to fit complex distributions since their introduction, especially in the field of generating natural images. Linear interpolation in the noise space produces continuous changes in the image space, which is an impressive property of GANs. However, this property receives no special consideration in the objective function of GANs or their derived models. This paper analyzes perturbations of the generator's input and their influence on the generated images. A smooth generator is then developed by investigating the tolerable input perturbation. We further integrate this smooth generator with a gradient-penalized discriminator and design a smooth GAN that generates stable and high-quality images. Experiments on real-world image datasets demonstrate the necessity of studying the smooth generator and the effectiveness of the proposed algorithm.
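A hedged sketch of the two ingredients named above: a generator smoothness term that penalizes sensitivity to small input perturbations, and a standard WGAN-GP style gradient penalty for the discriminator. The exact losses used in the paper may differ.

```python
import torch

def generator_smoothness_penalty(G, z, sigma=0.01):
    """Penalize how much the generated image changes under a small
    perturbation of the latent input (a sketch of the 'tolerable input
    perturbation' idea, not the paper's exact loss)."""
    z_perturbed = z + sigma * torch.randn_like(z)
    return (G(z) - G(z_perturbed)).pow(2).mean()

def discriminator_gradient_penalty(D, real, fake):
    """Standard WGAN-GP style gradient penalty on interpolated images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)[0]
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```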


2009 ◽  
Vol 26 (1) ◽  
pp. 109-121 ◽  
Author(s):  
WILSON S. GEISLER ◽  
JEFFREY S. PERRY

Correctly interpreting a natural image requires dealing properly with the effects of occlusion, and hence, contour grouping across occlusions is a major component of many natural visual tasks. To better understand the mechanisms of contour grouping across occlusions, we (a) measured the pair-wise statistics of edge elements from contours in natural images, as a function of edge element geometry and contrast polarity, (b) derived the ideal Bayesian observer for a contour occlusion task where the stimuli were extracted directly from natural images, and then (c) measured human performance in the same contour occlusion task. In addition to discovering new statistical properties of natural contours, we found that naïve human observers closely parallel ideal performance in our contour occlusion task. In fact, there was no region of the four-dimensional stimulus space (three geometry dimensions and one contrast dimension) where humans did not closely parallel the performance of the ideal observer (i.e., efficiency was approximately constant over the entire space). These results reject many other contour grouping hypotheses and strongly suggest that the neural mechanisms of contour grouping are tightly related to the statistical properties of contours in natural images.
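The ideal-observer computation can be illustrated as a likelihood-ratio decision over lookup tables of pairwise edge statistics; the bin layout and table names below are assumptions made for the sake of the example.

```python
import numpy as np

def ideal_observer_same_contour(edge_pair_bins, p_same, p_diff):
    """Illustrative ideal-observer rule: given the binned geometry and
    contrast-polarity measurement for a pair of edge elements, compare
    likelihoods estimated from natural-image statistics. p_same and p_diff
    are assumed to be lookup tables indexed by the same bins."""
    likelihood_ratio = p_same[edge_pair_bins] / p_diff[edge_pair_bins]
    return likelihood_ratio > 1.0  # respond "same contour" when more likely

# Toy example with a 4D table: 3 geometry dimensions + contrast polarity.
p_same = np.random.dirichlet(np.ones(8 * 8 * 8 * 2)).reshape(8, 8, 8, 2)
p_diff = np.random.dirichlet(np.ones(8 * 8 * 8 * 2)).reshape(8, 8, 8, 2)
print(ideal_observer_same_contour((3, 5, 2, 1), p_same, p_diff))
```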


2020 ◽  
Author(s):  
Kun Chen ◽  
Manning Wang ◽  
Zhijian Song

Background: Deep neural networks have been widely used in medical image segmentation and have achieved state-of-the-art performance in many tasks. However, unlike the segmentation of natural images or video frames, the manual segmentation of anatomical structures in medical images requires high expertise, so the amount of labeled training data is very small; this is a major obstacle to improving the performance of deep neural networks in medical image segmentation. Methods: In this paper, we propose a new end-to-end generation-segmentation framework that integrates a Generative Adversarial Network (GAN) with a segmentation network and trains them simultaneously. The novelty is that, during the training of the GAN, the intermediate synthetic images produced by the generator are used to pre-train the segmentation network. As GAN training advances, the synthetic images evolve gradually from very coarse to containing more realistic textures, and these images help train the segmentation network gradually. After the GAN is trained, the segmentation network is fine-tuned on the real labeled images. Results: We evaluated the proposed framework on four different datasets: 2D cardiac and lung datasets and 3D prostate and liver datasets. Compared with the original U-net and CE-Net, our framework achieves better segmentation performance, and it also obtains better segmentation results than U-net on small datasets. In addition, our framework is more effective than the usual data augmentation methods. Conclusions: The proposed framework can be used as a pre-training method for segmentation networks, which helps to obtain better segmentation results. Our method can address the shortcomings of current data augmentation methods to some extent.
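A rough sketch of the training schedule described in the Methods, assuming a conditional generator driven by the segmentation masks; all function and variable names are illustrative, not the authors' implementation.

```python
import torch

def pretrain_with_gan(G, D, S, loader, opt_g, opt_d, opt_s, gan_loss, seg_loss, epochs=10):
    """Train the GAN and, at every step, feed its intermediate synthetic
    images (paired with the conditioning masks) to the segmentation
    network S as pre-training data."""
    for _ in range(epochs):
        for images, masks in loader:
            fake = G(masks)                    # synthetic images conditioned on masks

            # Discriminator step.
            opt_d.zero_grad()
            d_loss = gan_loss(D(images), True) + gan_loss(D(fake.detach()), False)
            d_loss.backward()
            opt_d.step()

            # Generator step.
            opt_g.zero_grad()
            g_loss = gan_loss(D(fake), True)
            g_loss.backward()
            opt_g.step()

            # Segmentation pre-training step on the current synthetic images.
            opt_s.zero_grad()
            s_loss = seg_loss(S(fake.detach()), masks)
            s_loss.backward()
            opt_s.step()

def finetune(S, loader, opt_s, seg_loss, epochs=10):
    """After GAN training, fine-tune the segmentation network on real labeled images."""
    for _ in range(epochs):
        for images, masks in loader:
            opt_s.zero_grad()
            loss = seg_loss(S(images), masks)
            loss.backward()
            opt_s.step()
```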


NeuroImage ◽  
2018 ◽  
Vol 181 ◽  
pp. 775-785 ◽  
Author(s):  
K. Seeliger ◽  
U. Güçlü ◽  
L. Ambrogioni ◽  
Y. Güçlütürk ◽  
M.A.J. van Gerven

Author(s):  
Yungang Xu ◽  
Zhigang Zhang ◽  
Lei You ◽  
Jiajia Liu ◽  
Zhiwei Fan ◽  
...  

Single-cell RNA-sequencing (scRNA-seq) enables the characterization of transcriptomic profiles at single-cell resolution with increasingly high throughput. However, it suffers from many sources of technical noise, including insufficient mRNA molecules that lead to excess false zero values, termed dropouts. Computational approaches have been proposed to recover the biologically meaningful expression by borrowing information from similar cells in the observed dataset. However, these methods suffer from oversmoothing and removal of natural cell-to-cell stochasticity in gene expression. Here, we propose generative adversarial networks for scRNA-seq imputation (scIGANs), which uses generated cells rather than observed cells to avoid these limitations and balances the performance between major and rare cell populations. Evaluations based on a variety of simulated and real scRNA-seq datasets show that scIGANs is effective for dropout imputation and enhances various downstream analyses. scIGANs is robust to small datasets that have very few genes with low expression and/or cell-to-cell variance, works equally well on datasets from different scRNA-seq protocols, and is scalable to datasets with over 100,000 cells. We demonstrate with compelling evidence that scIGANs is not only an application of GANs to omics data but also a competitive imputation method for scRNA-seq data.
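One way to picture the "generated cells instead of observed cells" idea is the following imputation sketch; it is an assumption-laden illustration, not the published scIGANs code.

```python
import torch

@torch.no_grad()
def impute_with_generated_cells(G, cell, latent_dim=100, n_candidates=512, k=10):
    """Sample candidate expression profiles from a trained generator, pick the
    k candidates nearest to the observed cell on its non-zero genes, and fill
    the zeros (putative dropouts) with their average expression."""
    z = torch.randn(n_candidates, latent_dim)
    candidates = G(z)                                # (n_candidates, n_genes)
    observed = cell > 0
    dist = ((candidates[:, observed] - cell[observed]) ** 2).sum(dim=1)
    nearest = candidates[torch.topk(dist, k, largest=False).indices]
    imputed = cell.clone()
    imputed[~observed] = nearest[:, ~observed].mean(dim=0)
    return imputed
```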


2021 ◽  
Vol 1 (1) ◽  
pp. 20-22
Author(s):  
Awadelrahman M. A. Ahmed ◽  
Leen A. M. Ali

This paper contributes to automating medical image segmentation by proposing generative adversarial network-based models to segment both polyps and instruments in endoscopy images. A main contribution of this paper is providing explanations for the predictions using the layer-wise relevance propagation approach, showing which pixels in the input image are most relevant to the predictions. The models achieved Jaccard indices of 0.46 and 0.70 and accuracies of 0.84 and 0.96 on polyp segmentation and instrument segmentation, respectively.
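For reference, the two reported metrics can be computed for binary masks as in the generic sketch below (not the authors' evaluation code).

```python
import torch

def jaccard_and_accuracy(pred, target):
    """Jaccard index (IoU) and pixel accuracy for binary segmentation masks."""
    pred, target = pred.bool(), target.bool()
    intersection = (pred & target).sum().float()
    union = (pred | target).sum().float()
    jaccard = intersection / union.clamp(min=1)
    accuracy = (pred == target).float().mean()
    return jaccard.item(), accuracy.item()
```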

