PccGEO: prior constraints conditioned genetic elements optimization

2021 ◽  
Author(s):  
Hanwen Xu ◽  
Pengcheng Zhang ◽  
Haochen Wang ◽  
Lei Wei ◽  
Zhirui Hu ◽  
...  

Functional genetic elements are among the most essential units in synthetic biology. However, neither knowledge-driven nor data-driven methods can efficiently accomplish the complicated task of genetic element design, owing to the lack of explicit regulatory logic and of training samples. Here, we propose a knowledge-constrained deep learning model named PccGEO to automatically design functional genetic elements with high success rate and efficiency. PccGEO uses a novel "fill-in-the-flank" strategy with a conditional generative adversarial network to optimize the flanking regions of known functional sequences derived from biological prior knowledge, efficiently capturing implicit patterns within a reduced search space. We applied PccGEO to the design of Escherichia coli promoters and found that the implicit patterns in flanking regions affect promoter properties such as expression level. The PccGEO-designed constitutive and inducible promoters showed a success rate of more than 91.6% in in vivo validation. We further applied PccGEO with a limited number of nucleotide modifications and, surprisingly, found that the expression level of E. coli sigma 70 promoters could increase up to 159.3-fold with only 10-bp modifications. These results support the importance of implicit patterns in the design of functional genetic elements and validate the strong capacity of our method for their efficient design.
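The "fill-in-the-flank" idea can be illustrated with a small sketch: a conditional generator emits only the flanking nucleotides around a fixed, knowledge-derived core, so the search is restricted to the flanks. The core motif, flank length, encoding, and layer sizes below are illustrative assumptions, not the published PccGEO architecture.

```python
# Minimal sketch of "fill-in-the-flank": generate flanks around a fixed core motif.
import torch
import torch.nn as nn

FLANK_LEN = 20          # assumed length of each generated flank
CORE = "TATAAT"         # example conserved core fixed by prior knowledge (hypothetical choice)
ALPHABET = "ACGT"

def one_hot(seq):
    idx = torch.tensor([ALPHABET.index(c) for c in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).float()

class FlankGenerator(nn.Module):
    """Maps a noise vector plus the core (as a condition) to two relaxed one-hot flanks."""
    def __init__(self, z_dim=64):
        super().__init__()
        cond_dim = len(CORE) * 4
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * FLANK_LEN * 4),
        )

    def forward(self, z, core_onehot):
        h = self.net(torch.cat([z, core_onehot.flatten(1)], dim=1))
        logits = h.view(-1, 2 * FLANK_LEN, 4)
        return torch.softmax(logits, dim=-1)   # relaxed one-hot flank nucleotides

gen = FlankGenerator()
z = torch.randn(8, 64)
core = one_hot(CORE).unsqueeze(0).expand(8, -1, -1)
flanks = gen(z, core)
upstream, downstream = flanks[:, :FLANK_LEN], flanks[:, FLANK_LEN:]
# Assemble the full promoter: generated upstream flank + fixed core + generated downstream flank
full_promoter = torch.cat([upstream, core, downstream], dim=1)
print(full_promoter.shape)   # (8, 2*FLANK_LEN + len(CORE), 4)
```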

2020 ◽  
Vol 34 (07) ◽  
pp. 11507-11514
Author(s):  
Jianxin Lin ◽  
Yijun Wang ◽  
Zhibo Chen ◽  
Tianyu He

Unsupervised domain translation has recently achieved impressive performance with Generative Adversarial Networks (GANs) and sufficient (unpaired) training data. However, existing domain translation frameworks are built in a disposable way: learning experience is discarded, and the obtained model cannot be adapted to a newly arriving domain. In this work, we approach unsupervised domain translation from a meta-learning perspective. We propose a model called Meta-Translation GAN (MT-GAN) to find a good initialization for translation models. In the meta-training procedure, MT-GAN is explicitly trained with a primary translation task and a synthesized dual translation task. A cycle-consistency meta-optimization objective is designed to ensure generalization ability. We demonstrate the effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks, and show that our approach significantly outperforms existing domain translation methods when each domain contains no more than ten training samples.
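For intuition, the sketch below shows a cycle-consistency objective of the kind MT-GAN meta-optimizes, with the outer loop reduced to a first-order Reptile-style update on a single task. This is a deliberate simplification of the paper's MAML-style procedure, and all network shapes and hyperparameters are placeholders.

```python
# Sketch: cycle-consistency loss for a pair of toy generators, plus a Reptile-style
# outer update that nudges the shared initialization toward task-adapted weights.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

G_ab = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))  # domain A -> B
G_ba = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))  # domain B -> A

def cycle_loss(x_a, x_b):
    # A -> B -> A and B -> A -> B reconstructions should match the inputs
    return F.l1_loss(G_ba(G_ab(x_a)), x_a) + F.l1_loss(G_ab(G_ba(x_b)), x_b)

def meta_step(task_batches, inner_lr=1e-3, outer_lr=0.1, inner_steps=5):
    init_ab = copy.deepcopy(G_ab.state_dict())
    init_ba = copy.deepcopy(G_ba.state_dict())
    opt = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=inner_lr)
    for x_a, x_b in task_batches[:inner_steps]:      # inner adaptation on one translation task
        opt.zero_grad()
        cycle_loss(x_a, x_b).backward()
        opt.step()
    for model, init in ((G_ab, init_ab), (G_ba, init_ba)):  # outer: move init toward adapted weights
        adapted = model.state_dict()
        model.load_state_dict({k: init[k] + outer_lr * (adapted[k] - init[k]) for k in init})

meta_step([(torch.randn(16, 32), torch.randn(16, 32)) for _ in range(5)])
```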


2020 ◽  
Vol 245 (7) ◽  
pp. 597-605 ◽  
Author(s):  
Tri Vu ◽  
Mucong Li ◽  
Hannah Humayun ◽  
Yuan Zhou ◽  
Junjie Yao

With balanced spatial resolution, penetration depth, and imaging speed, photoacoustic computed tomography (PACT) is promising for clinical translation, such as breast cancer screening, functional brain imaging, and surgical guidance. Typically using a linear ultrasound (US) transducer array, PACT has great flexibility for hand-held applications. However, the linear US transducer array has a limited detection angle range and frequency bandwidth, resulting in limited-view and limited-bandwidth artifacts in the reconstructed PACT images. These artifacts significantly reduce imaging quality. To address these issues, existing solutions often pay a price in system complexity, cost, and/or imaging speed. Here, we propose a deep-learning-based method that uses the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) to reduce the limited-view and limited-bandwidth artifacts in PACT. Compared with existing reconstruction and convolutional neural network approaches, our model shows improved image quality and resolution. Our results on simulation, phantom, and in vivo data collectively demonstrate the feasibility of applying WGAN-GP to improve PACT’s image quality without any modification to the current imaging set-up. Impact statement: This study offers a promising solution for removing the limited-view and limited-bandwidth artifacts that have long hindered the clinical translation of PACT with a linear-array transducer and conventional image reconstruction. Our solution shows unprecedented artifact-removal ability for in vivo images, which may enable important applications such as imaging tumor angiogenesis and hypoxia. The study also reports, for the first time, the use of an advanced deep-learning model based on a stabilized generative adversarial network, and our results demonstrate its superiority over other state-of-the-art deep-learning methods.
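For readers unfamiliar with WGAN-GP, the gradient penalty added to the critic loss can be sketched as follows. The penalty weight and the assumption of 4D image batches follow common defaults rather than the authors' exact PACT configuration.

```python
# Sketch of the WGAN-GP gradient penalty: penalize deviation of the critic's
# gradient norm from 1 along random interpolations between real and fake images.
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # real, fake: image batches of shape (N, C, H, W); critic returns one score per sample
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```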


2019 ◽  
Vol 11 (9) ◽  
pp. 1017 ◽  
Author(s):  
Yang Zhang ◽  
Zhangyue Xiong ◽  
Yu Zang ◽  
Cheng Wang ◽  
Jonathan Li ◽  
...  

Road network extraction from remote sensing images plays an important role in various areas. However, due to complex imaging conditions and terrain factors, such as occlusion and shadows, it is very challenging to extract road networks with complete topological structures. In this paper, we propose a learning-based road network extraction framework using a Multi-supervised Generative Adversarial Network (MsGAN), which is jointly trained on the spectral and topological features of the road network. This design makes the network capable of learning how to “guess” aberrant road cases caused by occlusion and shadow, based on the relationship between the road region and the centerline, and thus able to produce a road network with integrated topology. Additionally, we present a sample quality measurement to efficiently generate a large number of training samples with little human interaction. Experiments on images from various satellites and comprehensive comparisons with state-of-the-art approaches on public datasets demonstrate that the proposed method provides high-quality results, especially in terms of the completeness of the road network.
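A hedged sketch of the multi-supervision idea: the generator is trained against both a road-region mask and a centerline mask, in addition to an adversarial term. The loss forms and weights below are illustrative and are not the published MsGAN objective.

```python
# Sketch: joint supervision on road region and centerline, plus an adversarial term.
import torch
import torch.nn.functional as F

def multi_supervised_loss(pred_region, pred_centerline, d_fake_score,
                          gt_region, gt_centerline, w_adv=0.01):
    # pred_* are generator logits; gt_* are float {0,1} masks of the same shape
    l_region = F.binary_cross_entropy_with_logits(pred_region, gt_region)
    l_center = F.binary_cross_entropy_with_logits(pred_centerline, gt_centerline)
    l_adv = -d_fake_score.mean()   # encourage the discriminator to score generated maps highly
    return l_region + l_center + w_adv * l_adv
```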


2021 ◽  
Vol 13 (12) ◽  
pp. 2243
Author(s):  
Andrew Hennessy ◽  
Kenneth Clarke ◽  
Megan Lewis

New, accurate and generalizable methods are required to transform the ever-increasing amount of raw hyperspectral data into actionable knowledge for applications such as environmental monitoring and precision agriculture. Here, we apply advances in generative deep learning to produce realistic synthetic hyperspectral vegetation data whilst maintaining class relationships. Specifically, a Generative Adversarial Network (GAN) is trained using the Cramér distance on two vegetation hyperspectral datasets, demonstrating the ability to approximate the distribution of the training samples. Evaluation of the synthetic spectra shows that they respect many of the statistical properties of the real spectra, conforming well to the sampled distributions of all real classes. An augmented dataset consisting of synthetic and original samples was used to train multiple classifiers, with increases in classification accuracy seen under almost all circumstances. Both datasets showed improvements in classification accuracy, ranging from a modest 0.16% for the Indian Pines set to a substantial 7.0% for the New Zealand vegetation. Selection of synthetic samples from sparse or outlying regions of the feature space of real spectral classes demonstrated increased discriminatory power over those from more central portions of the distributions.
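The Cramér (energy) distance between two sets of samples, which the trained critic approximates, can be estimated directly from samples as below. This illustrates the distance itself, with the learned critic embedding replaced by the identity; it is not the authors' training code, and the band count is a placeholder.

```python
# Sketch: sample-based energy (Cramér) distance between real and synthetic spectra.
import torch

def energy_distance(x, y):
    # x: (n, d) real spectra, y: (m, d) synthetic spectra
    d_xy = torch.cdist(x, y).mean()
    d_xx = torch.cdist(x, x).mean()
    d_yy = torch.cdist(y, y).mean()
    return 2 * d_xy - d_xx - d_yy   # smaller means the two sample sets are closer in distribution

real = torch.randn(64, 200)    # e.g. 200-band reflectance spectra (illustrative)
synth = torch.randn(64, 200)
print(energy_distance(real, synth))
```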


2020 ◽  
Author(s):  
Adrian J. Green ◽  
Martin J. Mohlenkamp ◽  
Jhuma Das ◽  
Meenal Chaudhari ◽  
Lisa Truong ◽  
...  

Abstract: There are currently 85,000 chemicals registered with the Environmental Protection Agency (EPA) under the Toxic Substances Control Act, but only a small fraction have measured toxicological data. To address this gap, high-throughput screening (HTS) methods are vital. As part of one such HTS effort, embryonic zebrafish were used to examine a suite of morphological and mortality endpoints at six concentrations from over 1,000 unique chemicals found in the ToxCast library (phases 1 and 2). We hypothesized that, by using a conditional Generative Adversarial Network (cGAN) and leveraging this large set of toxicity data plus chemical structure information, we could efficiently predict toxic outcomes of untested chemicals. CAS numbers for each chemical were used to generate textual files containing three-dimensional structural information for each chemical. Using a novel method in this space, we converted the 3D structural information into a weighted set of points while retaining all information about the structure. In vivo toxicity and chemical data were used to train two neural network generators: the first used regression (Go-ZT), while the second used a cGAN architecture (GAN-ZT) to train a generator to produce toxicity data. Our results showed that the Go-ZT and GAN-ZT models produce similar results, but the cGAN achieved a higher sensitivity (SE) of 85.7% vs. 71.4%. Conversely, Go-ZT attained higher specificity (SP), positive predictive value (PPV), and kappa of 67.3%, 23.4%, and 0.21, compared to 24.5%, 14.0%, and 0.03 for the cGAN, respectively. By combining Go-ZT and GAN-ZT, our consensus model improved the SP, PPV, and kappa to 75.5%, 25.0%, and 0.211, respectively, resulting in an area under the receiver operating characteristic curve (AUROC) of 0.663. Considering their potential use as prescreening tools, these models could provide in vivo toxicity predictions and insight into untested areas of chemical space to prioritize compounds for HT testing.

Summary: A conditional Generative Adversarial Network (cGAN) can leverage a large set of experimental toxicity data plus chemical structure information to predict the toxicity of untested compounds.
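For reference, the reported sensitivity, specificity, PPV, and Cohen's kappa follow directly from a binary confusion matrix, as sketched below. The counts in the example call are arbitrary placeholders, not the study's tallies.

```python
# Sketch: standard binary classification metrics from confusion-matrix counts.
def binary_metrics(tp, fp, tn, fn):
    n = tp + fp + tn + fn
    se = tp / (tp + fn)                       # sensitivity (recall)
    sp = tn / (tn + fp)                       # specificity
    ppv = tp / (tp + fp)                      # positive predictive value
    po = (tp + tn) / n                        # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)              # Cohen's kappa
    return se, sp, ppv, kappa

print(binary_metrics(tp=30, fp=20, tn=40, fn=10))  # illustrative counts only
```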


2021 ◽  
Author(s):  
Yonatan Winetraub ◽  
Edwin Yuan ◽  
Itamar Terem ◽  
Caroline Yu ◽  
Warren Chan ◽  
...  

Histological haematoxylin and eosin–stained (H&E) tissue sections are the gold standard for pathologic detection of cancer, tumour margin detection, and disease diagnosis [1]. Producing H&E sections, however, is invasive and time-consuming. Non-invasive optical imaging modalities, such as optical coherence tomography (OCT), permit label-free, micron-scale 3D imaging of biological tissue microstructure at significant depth (up to 1 mm) and with large fields of view [2], but they are difficult to interpret and correlate with clinical ground truth without specialized training [3]. Here we introduce the concept of a virtual biopsy, using generative neural networks to synthesize virtual H&E sections from OCT images. To do so, we developed a novel technique, “optical barcoding”, which allowed us to repeatedly extract from a 3D OCT volume the 2D OCT slice that corresponds to a given H&E tissue section, with very high alignment precision, down to 25 microns. Using 1,005 prospectively collected human skin sections from Mohs surgery operations on 71 patients, we constructed the largest dataset of H&E images and their precisely aligned OCT counterparts, and trained a conditional generative adversarial network [4] on these image pairs. Our results demonstrate the ability to use OCT images to generate high-fidelity virtual H&E sections and entire 3D H&E volumes. Applying this trained neural network to in vivo OCT images should enable physicians to readily incorporate OCT imaging into their clinical practice, reducing the number of unnecessary biopsy procedures.
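A plausible objective for learning the OCT-to-H&E mapping from aligned pairs is the standard paired-translation (pix2pix-style) combination of an adversarial term and an L1 reconstruction term, sketched below. The loss form and L1 weight are conventional defaults and may differ from the authors' exact configuration.

```python
# Sketch: paired image-to-image translation losses (pix2pix-style) for OCT -> virtual H&E.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_he, real_he, lambda_l1=100.0):
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    recon = F.l1_loss(fake_he, real_he)   # pixel-wise fidelity to the aligned H&E section
    return adv + lambda_l1 * recon

def discriminator_loss(d_real_logits, d_fake_logits):
    real = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
    return 0.5 * (real + fake)
```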


2021 ◽  
Vol 13 (10) ◽  
pp. 1894
Author(s):  
Chen Chen ◽  
Hongxiang Ma ◽  
Guorun Yao ◽  
Ning Lv ◽  
Hua Yang ◽  
...  

Because remote sensing images of China are difficult to obtain and their use requires a complicated administrative procedure, they cannot meet the demand for large numbers of training samples in deep-learning-based waterside change detection. Recently, data augmentation has become an effective way to address the absence of training samples. Therefore, an improved Generative Adversarial Network (GAN), BTD-sGAN (Text-based Deeply-supervised GAN), is proposed to generate training samples for remote sensing images of Anhui Province, China. The principal structure of our model is based on the Deeply-supervised GAN (D-sGAN), which we improve with respect to the diversity of the generated samples. First, the network takes Perlin noise, an image segmentation graph, and an encoded text vector as input, with the segmentation graph resized to 128 × 128 to facilitate fusion with the text vector. Then, to improve the diversity of the generated images, the text vector is used to modify the semantic loss of the downsampled text. Finally, to balance generation time and quality, only a two-layer Unet++ structure is used to generate the image. “Inception Score”, “Human Rank”, and “Inference Time” are used to evaluate the performance of BTD-sGAN, StackGAN++, and GAN-INT-CLS. To verify the diversity of the remote sensing images generated by BTD-sGAN, this paper also compares the results of a remote sensing interpretation network with and without the generated images added; the generated images improve the precision of soil-moving detection by 5%, which demonstrates the effectiveness of the proposed model.
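The three conditioning inputs described above might be fused along the channel dimension before generation, as in the sketch below. The channel counts are assumptions, and plain Gaussian noise stands in for the Perlin noise field.

```python
# Sketch: channel-wise fusion of a noise field, a 128x128 segmentation map,
# and an encoded text vector broadcast over the spatial grid.
import torch

batch = 4
noise = torch.randn(batch, 1, 128, 128)                        # stand-in for a Perlin noise field
seg_map = torch.randint(0, 2, (batch, 1, 128, 128)).float()    # binary segmentation graph
text_vec = torch.randn(batch, 32)                              # encoded text vector (assumed 32-d)

text_plane = text_vec.view(batch, 32, 1, 1).expand(-1, -1, 128, 128)
generator_input = torch.cat([noise, seg_map, text_plane], dim=1)
print(generator_input.shape)   # (4, 34, 128, 128), fed to the generator
```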


2021 ◽  
Vol 15 ◽  
Author(s):  
Guangcheng Bao ◽  
Bin Yan ◽  
Li Tong ◽  
Jun Shu ◽  
Linyuan Wang ◽  
...  

One of the greatest limitations in the field of EEG-based emotion recognition is the lack of training samples, which makes it difficult to establish effective models for emotion recognition. Inspired by the excellent achievements of generative models in image processing, we propose a data augmentation model named VAE-D2GAN for EEG-based emotion recognition using a generative adversarial network. EEG features representing different emotions are extracted as topological maps of differential entropy (DE) under five classical frequency bands. The proposed model is designed to learn the distributions of these features for real EEG signals and to generate artificial samples for training. The variational auto-encoder (VAE) architecture can learn the spatial distribution of the actual data through a latent vector, and is introduced into the dual-discriminator GAN to improve the diversity of the generated artificial samples. To evaluate the performance of this model, we conduct a systematic test on two public emotion EEG datasets, SEED and SEED-IV. With data augmentation, the method achieves recognition accuracies of 92.5% and 82.3% on the SEED and SEED-IV datasets, respectively, which are 1.5% and 3.5% higher than those of methods without data augmentation. The experimental results show that the artificial samples generated by our model can effectively enhance the performance of EEG-based emotion recognition.
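The differential-entropy (DE) feature referenced above reduces, for a band-passed signal assumed Gaussian, to 0.5·log(2πeσ²) per band. The sketch below computes it for the usual five bands; the sampling rate and filter order are chosen only for illustration.

```python
# Sketch: per-band differential entropy of a single EEG channel.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def de_features(signal, fs=200):
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")  # band-pass filter
        band = filtfilt(b, a, signal)
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * np.var(band))     # DE under Gaussian assumption
    return feats

print(de_features(np.random.randn(2000)))   # 10 s of synthetic single-channel EEG
```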

