Image Synthesis: Recently Published Documents

Total documents: 1167 (five years: 518)
H-index: 44 (five years: 14)

NeuroImage ◽  
2022 ◽  
Vol 247 ◽  
pp. 118812
Author(s):  
Zijin Gu ◽  
Keith Wakefield Jamison ◽  
Meenakshi Khosla ◽  
Emily J. Allen ◽  
Yihan Wu ◽  
...  

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Dan Yoon ◽  
Hyoun-Joong Kong ◽  
Byeong Soo Kim ◽  
Woo Sang Cho ◽  
Jung Chan Lee ◽  
...  

Abstract: Computer-aided detection (CADe) systems have been actively researched for polyp detection in colonoscopy. To be effective, such a system must detect additional polyps that endoscopists may easily miss. Sessile serrated lesions (SSLs) are a precursor to colorectal cancer with a relatively high miss rate, owing to their flat and subtle morphology. Colonoscopy CADe systems could help endoscopists; however, current systems exhibit very low performance in detecting SSLs. We propose a polyp detection system that reflects the morphological characteristics of SSLs to detect unrecognized or easily missed polyps. To train a robust system from imbalanced polyp data, a generative adversarial network (GAN) was used to synthesize high-resolution whole endoscopic images, including SSLs. Quantitative and qualitative evaluations confirmed that the synthetic images are realistic and contain SSL endoscopic features. Traditional augmentation methods were also applied as a baseline against which to compare the GAN augmentation method. The CADe system augmented with GAN-synthesized images showed a 17.5% improvement in sensitivity on SSLs. Consequently, we verified the potential of the GAN to synthesize high-resolution images with endoscopic features, and the proposed system was found to be effective in detecting easily missed polyps during colonoscopy.
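The headline number can be made concrete. Sensitivity is the fraction of actual polyps the detector flags, and a "17.5% improvement" reads most naturally as a relative gain; a minimal sketch with hypothetical counts chosen only for illustration (not taken from the paper):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall): fraction of actual polyps the system detects."""
    return true_positives / (true_positives + false_negatives)

def relative_improvement(baseline: float, augmented: float) -> float:
    """Relative gain of the GAN-augmented system over the baseline."""
    return (augmented - baseline) / baseline

# Hypothetical counts for illustration only: out of 100 true SSLs,
# the baseline detects 40 and the GAN-augmented system detects 47.
base = sensitivity(40, 60)   # 0.40
aug = sensitivity(47, 53)    # 0.47
print(round(relative_improvement(base, aug), 3))  # 0.175, i.e. 17.5%
```

Whether the paper reports a relative or an absolute sensitivity gain is not stated in this abstract; the sketch above assumes the relative reading.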


ACS Omega ◽  
2022 ◽  
Author(s):  
Shusen Liu ◽  
Bhavya Kailkhura ◽  
Jize Zhang ◽  
Anna M. Hiszpanski ◽  
Emily Robertson ◽  
...  

eLife ◽  
2022 ◽  
Vol 11 ◽  
Author(s):  
Jeffrey Wammes ◽  
Kenneth A Norman ◽  
Nicholas Turk-Browne

Studies of hippocampal learning have obtained seemingly contradictory results, with manipulations that increase coactivation of memories sometimes leading to differentiation of these memories, but sometimes not. These results could potentially be reconciled using the nonmonotonic plasticity hypothesis, which posits that representational change (memories moving apart or together) is a U-shaped function of the coactivation of these memories during learning. Testing this hypothesis requires manipulating coactivation over a wide enough range to reveal the full U-shape. To accomplish this, we used a novel neural network image synthesis procedure to create pairs of stimuli that varied parametrically in their similarity in high-level visual regions that provide input to the hippocampus. Sequences of these pairs were shown to human participants during high-resolution fMRI. As predicted, learning changed the representations of paired images in the dentate gyrus as a U-shaped function of image similarity, with neural differentiation occurring only for moderately similar images.
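The nonmonotonic plasticity hypothesis can be illustrated with a toy U-shaped rule. The specific function below is an assumption chosen only to reproduce the qualitative shape described in the abstract (near-zero change at low coactivation, differentiation at moderate coactivation, integration at high coactivation); it is not the authors' fitted model:

```python
def representational_change(coactivation: float, midpoint: float = 0.6) -> float:
    """Toy U-shaped plasticity rule (illustrative assumption, not the
    authors' model). Negative values mean the paired memories move apart
    (differentiation); positive values mean they move together
    (integration). The `midpoint` marks the crossover coactivation level."""
    return coactivation ** 2 * (coactivation - midpoint)

# The U-shape: low coactivation -> ~0, moderate -> negative dip, high -> positive
for c in (0.1, 0.5, 0.9):
    print(c, round(representational_change(c), 3))
```

The dip at moderate coactivation is the regime where the dentate gyrus results fall: moderately similar image pairs differentiated, while very similar pairs did not.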


2022 ◽  
Author(s):  
Akshay Vivek Jagadeesh ◽  
Justin Gardner

The human visual ability to recognize objects and scenes is widely thought to rely on representations in category-selective regions of visual cortex. These representations could support object vision by specifically representing objects, or, more simply, by representing complex visual features regardless of the particular spatial arrangement needed to constitute real world objects. That is, by representing visual textures. To discriminate between these hypotheses, we leveraged an image synthesis approach that, unlike previous methods, provides independent control over the complexity and spatial arrangement of visual features. We found that human observers could easily detect a natural object among synthetic images with similar complex features that were spatially scrambled. However, observer models built from BOLD responses from category-selective regions, as well as a model of macaque inferotemporal cortex and Imagenet-trained deep convolutional neural networks, were all unable to identify the real object. This inability was not due to a lack of signal-to-noise, as all of these observer models could predict human performance in image categorization tasks. How then might these texture-like representations in category-selective regions support object perception? An image-specific readout from category-selective cortex yielded a representation that was more selective for natural feature arrangement, showing that the information necessary for object discrimination is available. Thus, our results suggest that the role of human category-selective visual cortex is not to explicitly encode objects but rather to provide a basis set of texture-like features that can be infinitely reconfigured to flexibly learn and identify new object categories.


2022 ◽  
pp. 98-110
Author(s):  
Md Fazle Rabby ◽  
Md Abdullah Al Momin ◽  
Xiali Hei

Generative adversarial networks have been a major research focus in computer vision, especially in image synthesis and image-to-image translation. There are many variants of generative networks, and different GANs suit different applications. In this chapter, the authors investigated conditional generative adversarial networks (cGANs) for generating fake images, such as handwritten signatures, and demonstrated an implementation that generates fake handwritten signatures according to a human-specified condition vector.
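A minimal sketch of the conditional-generation idea described above: the generator receives a noise vector concatenated with a one-hot condition vector, so the same network can be steered toward a chosen class. The architecture, dimensions, and helper names here are illustrative assumptions (the chapter's exact network is not specified), and training is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator(noise_dim=16, n_classes=10, hidden=64, out_pixels=28 * 28):
    """Randomly initialised weights for a two-layer conditional generator
    (illustrative architecture, untrained)."""
    w1 = rng.standard_normal((noise_dim + n_classes, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, out_pixels)) * 0.1
    return w1, w2

def generate(w1, w2, noise, condition):
    """Forward pass: concatenate noise with the one-hot condition vector,
    then map through two dense layers; tanh keeps pixels in [-1, 1]."""
    x = np.concatenate([noise, condition])
    h = np.tanh(x @ w1)
    return np.tanh(h @ w2)

# Request a synthetic image of class 3 via a one-hot condition vector.
w1, w2 = make_generator()
z = rng.standard_normal(16)
y = np.eye(10)[3]
fake = generate(w1, w2, z, y)
print(fake.shape)  # (784,)
```

In a full cGAN the discriminator also receives the condition vector, so the adversarial loss pushes the generator to produce class-consistent samples rather than merely realistic ones.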


2021 ◽  
Author(s):  
Martim Quintas E Sousa ◽  
Joao Pedrosa ◽  
Joana Rocha ◽  
Sofia Cardoso Pereira ◽  
Ana Maria Mendonca ◽  
...  
