Joint Generative Image Deblurring Aided by Edge Attention Prior and Dynamic Kernel Selection

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Zhichao Zhang ◽  
Hui Chen ◽  
Xiaoqing Yin ◽  
Jinsheng Deng

Image deblurring is a classic and important problem in industrial fields such as aerial photo restoration, object recognition in robotics, and autonomous vehicles. Blurry images in real-world scenarios contain mixed blur types, such as natural motion blur caused by camera shake. Deblurring the entire image quickly is not always the best option; considering the computational cost, it is preferable to select different kernels to deblur different objects at a high semantic level. To achieve better restoration quality, it is also beneficial to combine the location of each blur category with important structural information about the specific artifacts and the degree of blur. The goal of blind image deblurring is to restore a sharp image when the blur kernel is unknown. Recent deblurring methods tend to reconstruct prior knowledge while neglecting the influence of blur estimation and visual fidelity on image details and structure. Generative adversarial networks (GANs) have recently attracted considerable attention from both academia and industry because they can generate new data with the same statistics as the training set. This study therefore proposes a generative neural architecture with an edge attention algorithm developed to restore vivid multimedia patches. Joint edge generation and image restoration are designed to address low-level multimedia retrieval. The resulting multipath refinement fusion network (MRFNet) can not only deblur images directly but also process individual frames extracted from videos. Ablation experiments validate that MRFNet performs better under joint training than as separate models. Compared with other GAN methods, our two-phase method achieves state-of-the-art speed and accuracy as well as a significant visual improvement.
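As a rough illustration of the kind of edge prior an edge-attention branch could consume, the sketch below builds a Sobel gradient-magnitude map. This is a hand-crafted stand-in for illustration only; the paper's edge generator is learned, and the function name and image layout here are assumptions.

```python
# Hypothetical edge-prior map: Sobel gradient magnitude on a 2D grayscale
# image given as a list of lists. High values mark structural edges that an
# attention mechanism could emphasize during deblurring.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_magnitude(img):
    """Return the gradient-magnitude map; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Usage: a vertical step edge lights up, a flat region stays at zero.
step = [[0, 0, 0, 1, 1, 1] for _ in range(4)]
edges = edge_magnitude(step)
```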

2018 ◽  
Vol 32 (34n36) ◽  
pp. 1840087 ◽  
Author(s):  
Qiwei Chen ◽  
Yiming Wang

A blind image deblurring algorithm based on the relative gradient and sparse representation is proposed in this paper. The layered method restores the image in three steps: edge extraction, blur-kernel estimation, and image reconstruction. In textured regions, the positive and negative gradients reverse repeatedly, whereas an edge that reflects the image structure exhibits only a single gradient change. Based on this characteristic, the edges of the image are extracted using the relative gradient, and the blur kernel is then estimated from them. In the reconstruction stage, to keep the image and the overcomplete dictionary matrix from becoming too large, the image is divided into small blocks. An overcomplete dictionary is used for sparse representation, and the image is reconstructed with an iterative threshold-shrinkage method to improve the quality of image restoration. Experimental results show that the proposed method effectively improves the quality of image restoration.
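The texture-versus-edge distinction described above can be illustrated in one dimension: texture produces repeated gradient-sign reversals, while a single structural edge does not. This is a simplified sketch of the observation, not the paper's relative-gradient formula; the function name and signals are assumptions.

```python
# Count sign reversals in the first-order gradient of a 1D signal.
# Texture oscillates (many reversals); a clean step edge has none.

def gradient_sign_reversals(signal):
    grads = [b - a for a, b in zip(signal, signal[1:])]
    signs = [g for g in grads if g != 0]          # ignore flat runs
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

edge = [0, 0, 0, 5, 5, 5]        # one step: gradient never reverses
texture = [0, 2, 0, 2, 0, 2, 0]  # oscillation: gradient reverses repeatedly
```

A detector built on this idea would keep regions with few reversals (structure) for kernel estimation and discard high-reversal regions (texture).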


2020 ◽  
Author(s):  
Adrian J. Green ◽  
Martin J. Mohlenkamp ◽  
Jhuma Das ◽  
Meenal Chaudhari ◽  
Lisa Truong ◽  
...  

Abstract There are currently 85,000 chemicals registered with the Environmental Protection Agency (EPA) under the Toxic Substances Control Act, but only a small fraction have measured toxicological data. To address this gap, high-throughput screening (HTS) methods are vital. As part of one such HTS effort, embryonic zebrafish were used to examine a suite of morphological and mortality endpoints at six concentrations for over 1,000 unique chemicals found in the ToxCast library (phases 1 and 2). We hypothesized that by using a conditional Generative Adversarial Network (cGAN) and leveraging this large set of toxicity data plus chemical structure information, we could efficiently predict the toxic outcomes of untested chemicals. The CAS number of each chemical was used to generate a textual file containing its three-dimensional structural information. Using a novel method in this space, we converted the 3D structural information into a weighted set of points while retaining all information about the structure. In vivo toxicity and chemical data were used to train two neural network generators: the first used regression (Go-ZT), while the second used a cGAN architecture (GAN-ZT) to train a generator to produce toxicity data. Our results showed that the two models produce similar results, but the cGAN achieved a higher sensitivity (SE) of 85.7% vs. 71.4%. Conversely, Go-ZT attained higher specificity (SP), positive predictive value (PPV), and Kappa results of 67.3%, 23.4%, and 0.21, compared to 24.5%, 14.0%, and 0.03 for the cGAN, respectively. By combining Go-ZT and GAN-ZT, our consensus model improved the SP, PPV, and Kappa to 75.5%, 25.0%, and 0.211, respectively, resulting in an area under the receiver operating characteristic curve (AUROC) of 0.663.
Considering their potential use as prescreening tools, these models could provide in vivo toxicity predictions and insight into untested areas of chemical space to prioritize compounds for high-throughput testing.
Summary: A conditional Generative Adversarial Network (cGAN) can leverage a large set of experimental toxicity data plus chemical structure information to predict the toxicity of untested compounds.
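The reported sensitivity, specificity, PPV, and Cohen's kappa all derive from a binary confusion matrix; a minimal sketch with illustrative counts (not the study's data):

```python
# Standard binary-classification metrics from confusion-matrix counts.

def binary_metrics(tp, fp, tn, fn):
    n = tp + fp + tn + fn
    se = tp / (tp + fn)        # sensitivity (recall)
    sp = tn / (tn + fp)        # specificity
    ppv = tp / (tp + fp)       # positive predictive value
    po = (tp + tn) / n         # observed agreement
    # chance agreement: product of predicted and actual marginals per class
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return se, sp, ppv, kappa

# Usage with made-up counts:
se, sp, ppv, kappa = binary_metrics(tp=3, fp=1, tn=4, fn=2)
```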


2021 ◽  
Author(s):  
Bingcai Wei ◽  
liye zhang ◽  
Kangtao Wang ◽  
Qun Kong ◽  
Zhuang Wang

Abstract Extracting traffic information from images plays an important role in the Internet of Vehicles (IoV). However, due to the high-speed movement and bumpiness of a vehicle, motion blur occurs during image acquisition. In addition, on rainy days, raindrops attached to the lens occlude the target and distort the image. These problems pose great obstacles to extracting key information from traffic images, which affects the vehicle control system's real-time judgment of road conditions and can lead to decision errors or even traffic accidents. In this paper, we propose a motion-blur restoration and rain-removal algorithm for the IoV based on Generative Adversarial Networks (GANs) and transfer learning. Dynamic scene deblurring and image de-raining are both challenging classical low-level vision tasks. For both tasks, we first designed a residual block containing three 256-channel convolutional layers, using the Leaky ReLU activation function instead of the ReLU of a conventional residual block. Second, we used generative adversarial networks with our residual block for both the image-deblurring and the image-de-raining task. Third, experimental results on the synthetic-blur dataset GOPRO and the real-blur dataset RealBlur confirm the effectiveness of our model for image deblurring. Finally, the pre-trained model can be reused for the transfer-learning-based image de-raining task and shows good results on several datasets.
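The swap from ReLU to Leaky ReLU in the residual block can be sketched numerically. In this sketch the three 256-channel convolutions are abstracted into an arbitrary per-element transform, and the 0.2 negative slope is an assumption, not a value stated above:

```python
def leaky_relu(x, slope=0.2):
    """Leaky ReLU: positives pass through; negatives are scaled by `slope`
    instead of being zeroed, so gradients keep flowing on the negative side.
    The slope value is an assumption for illustration."""
    return x if x >= 0 else slope * x

def residual_block(xs, transform):
    """Residual skip connection: output = input + activation(F(input)).
    `transform` stands in for the stack of convolutional layers."""
    return [x + leaky_relu(transform(x)) for x in xs]

# Usage: identity transform makes the residual path easy to inspect.
out = residual_block([1.0, -1.0], lambda x: x)
```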


2020 ◽  
Author(s):  
Ramiro Rodriguez Colmeiro ◽  
Claudio Verrastro ◽  
Daniel Minsky ◽  
Thomas Grosges

Abstract Background: The correction of attenuation effects in Positron Emission Tomography (PET) imaging is fundamental to obtaining a correct radiotracer distribution. However, direct measurement of this attenuation map is not error-free and normally delivers an additional ionizing-radiation dose to the patient. Here, we propose to obtain the whole-body attenuation map using a 3D U-Net generative adversarial network. The network is trained to learn the mapping from non-attenuation-corrected 18F-fluorodeoxyglucose PET images to a synthetic Computerized Tomography (sCT) image and to label the tissue of each input voxel. The sCT image is further refined with an adversarial training scheme to recover higher-frequency details and lost structures using context information. The model is trained and tested on publicly available datasets containing PET images from different scanners with different radiotracer administration and reconstruction modalities; it is trained with 108 samples and validated on 10 samples. Results: sCT generation was tested on 133 samples from 8 distinct datasets. The network achieves a mean absolute error of 103 ± 18 HU and a peak signal-to-noise ratio of 18.6 ± 1.5 dB. The generated images show good correlation with the unknown structural information. Conclusions: The proposed deep learning topology is capable of generating whole-body attenuation maps from uncorrected PET image data. Moreover, its accuracy holds in the presence of data from multiple sources and modalities, and it is trained on publicly available datasets.
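A minimal sketch of the two reported evaluation metrics, mean absolute error and peak signal-to-noise ratio, computed over flattened voxel lists; the peak value used here is an illustrative assumption, not the paper's HU range:

```python
import math

def mae(pred, target):
    """Mean absolute error over paired values (e.g. sCT vs. CT voxels in HU)."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def psnr(pred, target, peak):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Usage with toy values:
err = mae([0, 10], [10, 0])
quality = psnr([0, 0], [10, 10], peak=255)
```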


2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a given set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from the optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
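The objective-reinforced idea blends the discriminator's realism score with a domain objective through a mixing weight; a minimal sketch, assuming both scores lie in [0, 1] (the function name and the default weight are illustrative, not from the paper):

```python
def blended_reward(d_score, objective_score, lam=0.5):
    """Reward for the RL step: lam weights the GAN discriminator's score
    (realism) against a domain metric (e.g. a physicochemical property).
    lam = 1 recovers a plain GAN; lam = 0 pure objective optimization."""
    return lam * d_score + (1 - lam) * objective_score

# Usage: a molecule scored 0.2 by the discriminator but 0.8 on the objective.
r = blended_reward(0.2, 0.8, lam=0.5)
```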


2020 ◽  
Vol 1 (1) ◽  
pp. 128-140 ◽  
Author(s):  
Mohammad Hatami ◽  
D Jing ◽  

In this study, two-phase asymmetric peristaltic Carreau-Yasuda nanofluid flow in a vertical, tapered wavy channel is modeled, and the mixed heat transfer of the flow is analyzed. A two-phase method is adopted so that the nanoparticle concentration can be studied as a separate phase. The peristaltic waves are assumed to travel along the X-axis at a constant speed c, and constant temperatures and constant nanoparticle concentrations are imposed on both the left and right walls. The problem is solved analytically by the least squares method (LSM) using the Maple 15.0 mathematical software, and the analytical results are compared with numerical outcomes. Finally, the effects of the most important parameters (Weissenberg number, Prandtl number, Brownian motion parameter, thermophoresis parameter, and local temperature and nanoparticle Grashof numbers) on the velocity, temperature, and nanoparticle-concentration functions are presented. As an important outcome, on the left side of the channel increasing the Grashof numbers reduces the velocity profiles, while on the right side the opposite holds.
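The LSM solution strategy mentioned above can be summarized as a weighted-residual minimization; a generic sketch (not the paper's specific governing equations):

```latex
% Trial solution with unknown coefficients c_i over basis functions satisfying
% the boundary conditions; R is the residual of the governing operator L.
\tilde{u}(x) = \sum_{i=1}^{n} c_i \, \varphi_i(x), \qquad
R(x; c_1, \dots, c_n) = L\!\left(\tilde{u}\right) - f

% LSM: minimize the integrated squared residual over the domain \Omega,
% giving n algebraic equations for the n coefficients.
S = \int_{\Omega} R^2 \, dx, \qquad
\frac{\partial S}{\partial c_i}
  = 2 \int_{\Omega} R \, \frac{\partial R}{\partial c_i} \, dx = 0,
\quad i = 1, \dots, n
```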


2013 ◽  
Vol 24 (5) ◽  
pp. 1143-1154 ◽  
Author(s):  
Shu TANG ◽  
Wei-Guo GONG ◽  
Jian-Hua ZHONG

2021 ◽  
Vol 11 (15) ◽  
pp. 7034
Author(s):  
Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow, and images captured under such conditions degrade their performance. Vision systems therefore need to detect, recognize, and remove noise caused by rain, snow, and mist to boost the performance of their image-processing algorithms. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems whose cameras are exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. The network is composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generates an attention map to detect rain droplets, and a de-rained image is produced by the contextual autoencoder. We increased the number of attentive-recurrent network layers to prevent gradient sparsity, making generation more stable without preventing the network from converging. The experimental results confirmed that the extended ATTGAN can effectively remove various types of raindrops from images.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Yingxi Yang ◽  
Hui Wang ◽  
Wen Li ◽  
Xiaobo Wang ◽  
Shizhao Wei ◽  
...  

Abstract Background: Protein post-translational modification (PTM) is key to investigating the mechanisms of protein function. With the rapid development of proteomics technology, a large amount of protein sequence data has been generated, which highlights the importance of in-depth study and analysis of PTMs in proteins. Method: We propose a new multi-classification machine learning pipeline, MultiLyGAN, to identify seven types of lysine-modified sites. Using eight sequential and five structural feature-construction methods, 1497 valid features remained after filtering by the Pearson correlation coefficient. To address the data imbalance, two influential deep generative methods, the Conditional Generative Adversarial Network (CGAN) and the Conditional Wasserstein Generative Adversarial Network (CWGAN), were leveraged and compared to generate new samples for the classes with fewer samples. Finally, a random forest algorithm was used to predict the seven categories. Results: In tenfold cross-validation, the accuracy (Acc) and Matthews correlation coefficient (MCC) were 0.8589 and 0.8376, respectively; in the independent test, they were 0.8549 and 0.8330. The results indicated that CWGAN better resolved the data imbalance and stabilized the training error. In addition, an accumulated feature-importance analysis showed that CKSAAP, PWM, and structural features were the three most important feature-encoding schemes. MultiLyGAN is available at https://github.com/Lab-Xu/MultiLyGAN. Conclusions: The CWGAN greatly improved predictive performance in all experiments. Features derived from the CKSAAP, PWM, and structure schemes are the most informative and contributed most to the prediction of PTMs.
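A minimal sketch of the Pearson-correlation computation used for feature filtering; the cut-off threshold for dropping a redundant feature is not stated above, so none is assumed here:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length feature
    columns; values near +/-1 indicate one column is redundant given the other."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Usage: a perfectly correlated pair would be a candidate for removal.
r = pearson([1, 2, 3], [2, 4, 6])
```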

