Disease-Image Specific Generative Adversarial Network for Brain Disease Diagnosis with Incomplete Multi-modal Neuroimages

Author(s): Yongsheng Pan, Mingxia Liu, Chunfeng Lian, Yong Xia, Dinggang Shen
2021
Author(s): Yonatan Winetraub, Edwin Yuan, Itamar Terem, Caroline Yu, Warren Chan, ...

Histological haematoxylin and eosin–stained (H&E) tissue sections are used as the gold standard for pathologic detection of cancer, tumour margin detection, and disease diagnosis [1]. Producing H&E sections, however, is invasive and time-consuming. Non-invasive optical imaging modalities, such as optical coherence tomography (OCT), permit label-free, micron-scale 3D imaging of biological tissue microstructure with significant depth (up to 1 mm) and large fields of view [2], but are difficult to interpret and correlate with clinical ground truth without specialized training [3]. Here we introduce the concept of a virtual biopsy, using generative neural networks to synthesize virtual H&E sections from OCT images. To do so we have developed a novel technique, “optical barcoding”, which has allowed us to repeatedly extract the 2D OCT slice from a 3D OCT volume that corresponds to a given H&E tissue section, with very high alignment precision down to 25 microns. Using 1,005 prospectively collected human skin sections from Mohs surgery operations of 71 patients, we constructed the largest dataset of H&E images and their corresponding precisely aligned OCT images, and trained a conditional generative adversarial network [4] on these image pairs. Our results demonstrate the ability to use OCT images to generate high-fidelity virtual H&E sections and entire 3D H&E volumes. Applying this trained neural network to in vivo OCT images should enable physicians to readily incorporate OCT imaging into their clinical practice, reducing the number of unnecessary biopsy procedures.
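The conditional GAN trained on these OCT/H&E pairs follows the paired image-to-image translation setting. A minimal numpy sketch of the pix2pix-style generator objective (an adversarial term plus an L1 reconstruction term against the aligned H&E section); the λ weight and function names here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def generator_loss(d_fake, fake, real, lam=100.0):
    """Pix2pix-style generator objective: fool the discriminator on the
    synthesized H&E image while staying close to the real section in L1.

    d_fake -- discriminator probabilities for (OCT, generated H&E) pairs
    fake   -- generated H&E pixel values
    real   -- ground-truth H&E pixel values
    """
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))   # BCE against the "real" label 1
    l1 = np.mean(np.abs(fake - real))      # pixel-wise reconstruction term
    return adv + lam * l1

# Toy check: a perfect generator (d_fake -> 1, fake == real) has near-zero loss
loss = generator_loss(np.array([1.0]), np.zeros(4), np.zeros(4))
```

The L1 term is what ties the output to the specific aligned H&E section rather than to any plausible-looking stain; the adversarial term keeps the synthesized texture realistic.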


2021, Vol 15
Author(s): Wanyun Lin, Weiming Lin, Gang Chen, Hejun Zhang, Qinquan Gao, ...

Combining multi-modality data for brain disease diagnosis, such as Alzheimer’s disease (AD), commonly leads to better performance than using a single modality. However, training a multi-modality model remains challenging because complete data covering all modalities are difficult to obtain in clinical practice. In particular, it is difficult to obtain both magnetic resonance imaging (MRI) and positron emission tomography (PET) scans of a single patient: PET is expensive and requires the injection of radioactive substances into the patient’s body, whereas MRI is cheaper, safer, and more widely used in practice. Discarding samples without PET data is a common approach in previous studies, but the reduced sample size degrades model performance. To take advantage of complementary multi-modal information, we first adopt the Reversible Generative Adversarial Network (RevGAN) model to reconstruct the missing data. We then propose a 3D convolutional neural network (CNN) classification model with multi-modality input to perform AD diagnosis. We evaluated our method on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database and compared its performance with state-of-the-art methods. The experimental results show that the structural and functional information of brain tissue can be mapped well and that the images synthesized by our method are close to the real images. In addition, the use of synthetic data is beneficial for the diagnosis and prediction of Alzheimer’s disease, demonstrating the effectiveness of the proposed framework.
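The imputation step described above can be sketched in a few lines: when a sample lacks PET, a trained generator maps its MRI to a synthetic PET, so every sample reaches the multi-modality classifier with both inputs. The generator here (`fake_pet_generator`) is a hypothetical stand-in stub, not the actual RevGAN model:

```python
def fake_pet_generator(mri):
    # Stand-in for the trained RevGAN MRI -> PET mapping; a placeholder
    # transform so the pipeline below is runnable.
    return [v * 0.5 for v in mri]

def complete_sample(sample, generator=fake_pet_generator):
    """Return (mri, pet), synthesizing PET from MRI when it is missing."""
    mri = sample["mri"]
    pet = sample.get("pet")
    if pet is None:
        pet = generator(mri)  # impute the missing modality
    return mri, pet

# A toy cohort where the second subject has no PET scan
cohort = [{"mri": [1.0, 2.0], "pet": [0.3, 0.4]},
          {"mri": [2.0, 4.0], "pet": None}]
pairs = [complete_sample(s) for s in cohort]
```

The point of the design is that the downstream 3D CNN never has to branch on modality availability; imputation is handled entirely upstream.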


2017
Author(s): Benjamin Sanchez-Lengeling, Carlos Outeiral, Gabriel L. Guimaraes, Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a given set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
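The core of the ORGAN approach is that the generator's reward mixes the RL objective with the discriminator's realism score, so optimizing a property does not collapse into unrealistic molecules. A minimal sketch; the λ weight and both scoring functions are illustrative assumptions:

```python
def organ_reward(molecule, objective, discriminator, lam=0.5):
    """ORGAN-style mixed reward: a convex combination of a desired
    chemical metric and the discriminator's realism score, both in [0, 1]."""
    return lam * objective(molecule) + (1.0 - lam) * discriminator(molecule)

# Toy stand-ins: a hypothetical property proxy and a fixed realism score
prop_proxy = lambda m: min(len(m) / 10.0, 1.0)
d_score = lambda m: 0.8

r = organ_reward("CCO", prop_proxy, d_score, lam=0.5)
```

Setting λ = 1 recovers pure RL on the property; λ = 0 recovers a plain GAN, which makes the trade-off between optimization and realism a single tunable knob.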


Author(s): Annapoorani Gopal, Lathaselvi Gandhimaruthian, Javid Ali

Deep neural networks have gained prominence in the biomedical domain, becoming among the most commonly used machine learning models. Mammograms can be used to detect breast cancer with high precision with the help of a Convolutional Neural Network (CNN), a deep learning technique. Training a CNN from scratch, however, requires exhaustively labeled data. This can be mitigated by deploying a Generative Adversarial Network (GAN), which comparatively requires less training data during mammogram screening. In the proposed study, applications of GANs in estimating breast density, synthesizing high-resolution mammograms for clustered microcalcification analysis, segmenting breast tumors effectively, analyzing breast tumor shape, extracting features, and augmenting images during mammogram classification are extensively reviewed.


2019, Vol 52 (21), pp. 291-296
Author(s): Minsung Sung, Jason Kim, Juhwan Kim, Son-Cheol Yu
