Super-Resolution Based on Generative Adversarial Network for HRTEM Images

Author(s):  
Fuqi Mao ◽  
Xiaohan Guan ◽  
Ruoyu Wang ◽  
Wen Yue

As an important tool for studying the microstructure and properties of materials, High Resolution Transmission Electron Microscope (HRTEM) imaging can yield lattice fringe images (reflecting crystal plane spacing), structure images, and individual atom images (reflecting the configuration of atoms or atomic groups in the crystal structure). Despite the rapid development of HRTEM devices, the achievable resolution of HRTEM images remains limited for the human visual system. With the rapid progress of deep learning in recent years, researchers have been actively exploring deep-learning-based super-resolution (SR) models, which have reached state-of-the-art performance on various SR benchmarks. Using SR to reconstruct high-resolution HRTEM images is helpful for materials science research. However, one core issue remains unresolved: most of these super-resolution methods require the training data to exist in pairs, and in practice, especially for HRTEM images, no corresponding HR images are available. To reconstruct high-quality HRTEM images, a novel super-resolution architecture for HRTEM images is proposed in this paper. Borrowing the idea of Dual Regression Networks (DRN), we add a dual regression structure to ESRGAN and train the model with unpaired HRTEM images and paired natural images. Extensive benchmark experiments demonstrate that the proposed method outperforms the most recent SISR methods in both quantitative and visual results.
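
A minimal sketch of the dual regression idea in PyTorch (an assumption; layer sizes, loss weights, and function names below are illustrative, not the authors' implementation): a lightweight dual branch maps the super-resolved output back to low resolution, so unpaired HRTEM images still provide a training signal through LR-consistency, while paired natural images additionally supply the usual primal supervision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranch(nn.Module):
    """Lightweight HR -> LR mapping used as the dual regression head (hypothetical layer sizes)."""
    def __init__(self, scale=4, channels=3):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, sr):
        lr = F.interpolate(sr, scale_factor=1.0 / self.scale,
                           mode="bicubic", align_corners=False)
        return self.body(lr)

def dual_regression_loss(generator, dual, lr, hr=None, lambda_dual=0.1):
    """Primal L1 loss when an HR pair exists (natural images), plus a dual
    LR-consistency loss that also provides a signal for unpaired HRTEM inputs."""
    sr = generator(lr)
    loss = lambda_dual * F.l1_loss(dual(sr), lr)   # SR mapped back should match the LR input
    if hr is not None:                             # primal term only for paired data
        loss = loss + F.l1_loss(sr, hr)
    return loss
```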

2019 ◽  
Author(s):  
Zhou Hang ◽  
Li Shiwei ◽  
Huang Qing ◽  
Liu Shijie ◽  
Quan Tingwei ◽  
...  

Abstract. Deep learning technology enables us to acquire high-resolution images from low-resolution images in biological imaging without sophisticated optical hardware. However, current methods require a huge number of precisely registered low-resolution (LR) and high-resolution (HR) volume image pairs, a requirement that is challenging for biological volume imaging. Here, we propose a 3D deep learning network based on a dual generative adversarial network (dual-GAN) framework for recovering HR volume images from LR volume images. Our network avoids learning a direct mapping from LR to HR volume image pairs, which would require a precise image registration process, and the cycle-consistency constraint keeps the predicted HR volume image faithful to its corresponding LR volume image. The proposed method achieves the recovery of 20x/1.0 NA volume images from 5x/0.16 NA volume images collected by light-sheet microscopy. In essence, our method is suitable for other imaging modalities as well.
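
A minimal sketch of the cycle-consistency objective in PyTorch (an assumption; the generator and discriminator functions and the weights are placeholders, not the paper's network): because the LR and HR volumes are unpaired, each generator's output is mapped back by the opposite generator and compared with the original volume, which is what keeps the predicted HR volume faithful to its LR counterpart.

```python
import torch
import torch.nn.functional as F

def cycle_losses(G, F_net, D_hr, D_lr, lr_vol, hr_vol, lambda_cyc=10.0):
    """lr_vol, hr_vol: unpaired 5D tensors (N, C, D, H, W). G: LR -> HR, F_net: HR -> LR."""
    fake_hr = G(lr_vol)
    fake_lr = F_net(hr_vol)

    # Non-saturating adversarial terms for the two generators
    d_hr_out, d_lr_out = D_hr(fake_hr), D_lr(fake_lr)
    adv = F.binary_cross_entropy_with_logits(d_hr_out, torch.ones_like(d_hr_out)) + \
          F.binary_cross_entropy_with_logits(d_lr_out, torch.ones_like(d_lr_out))

    # Cycle consistency: a round trip through both generators must recover the input volume
    cyc = F.l1_loss(F_net(fake_hr), lr_vol) + F.l1_loss(G(fake_lr), hr_vol)
    return adv + lambda_cyc * cyc
```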


2019 ◽  
Author(s):  
Linjing Fang ◽  
Fred Monroe ◽  
Sammy Weiser Novak ◽  
Lyndsey Kirk ◽  
Cara Rae Schiavon ◽  
...  

Point scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point scanning systems are difficult to optimize simultaneously. In particular, point scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show these limitations can be mitigated via the use of deep learning-based super-sampling of undersampled images acquired on a point-scanning system, which we termed point-scanning super-resolution (PSSR) imaging. Oversampled, high SNR ground truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were "crappified" to generate semi-synthetic training data for PSSR models that were then used to restore real-world undersampled images. Remarkably, our EM PSSR model could restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable 2 nm resolution images with our serial block face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses facilitates Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
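
A minimal sketch of a crappification step in Python (an assumption; the noise model and parameters here are illustrative, not the exact pipeline used for PSSR): an oversampled, high-SNR image is noised and downsampled to produce the semi-synthetic low-quality input, while the original serves as the training target.

```python
import numpy as np
from skimage.transform import rescale

def crappify(hr, downscale=4, gauss_sigma=0.01, poisson_scale=0.5, rng=None):
    """Degrade a high-SNR ground-truth image into a semi-synthetic low-quality input."""
    rng = np.random.default_rng() if rng is None else rng
    img = hr.astype(np.float32) / hr.max()                                   # normalise to [0, 1]
    img = rng.poisson(img * 255 * poisson_scale) / (255.0 * poisson_scale)   # shot noise
    img = img + rng.normal(0.0, gauss_sigma, img.shape)                      # additive read noise
    lr = rescale(img, 1.0 / downscale, anti_aliasing=True)                   # undersample
    return np.clip(lr, 0.0, 1.0), hr                                         # (input, target) pair
```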


Author(s):  
Annapoorani Gopal ◽  
Lathaselvi Gandhimaruthian ◽  
Javid Ali

Deep neural networks have gained prominence in the biomedical domain, becoming the most commonly used networks as they have superseded earlier machine learning technology. Mammograms can be used to detect breast cancers with high precision with the help of the Convolutional Neural Network (CNN), a deep learning technology. Training a CNN from scratch requires exhaustive labeled data. This can be overcome by deploying a Generative Adversarial Network (GAN), which comparatively needs less training data during mammogram screening. In the proposed study, the applications of GANs in estimating breast density, synthesizing high-resolution mammograms for clustered microcalcification analysis, effectively segmenting breast tumors, analyzing breast tumor shape, extracting features, and augmenting images during mammogram classification have been extensively reviewed.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3913 ◽  
Author(s):  
Mingxuan Li ◽  
Ou Li ◽  
Guangyi Liu ◽  
Ce Zhang

With the recent explosive growth of deep learning, automatic modulation recognition has undergone rapid development. Most newly proposed methods depend on large numbers of labeled samples. We are committed to using fewer labeled samples to perform automatic modulation recognition in the cognitive radio domain. Here, a semi-supervised learning method based on adversarial training, called the signal classifier generative adversarial network, is proposed. Most prior methods based on this technology target computer vision applications. We improve the existing generative adversarial network structure by adding an encoder network and a signal spatial transform module, allowing our framework to address radio signal processing tasks more efficiently. These two technical improvements effectively avoid the nonconvergence and mode collapse problems caused by the complexity of radio signals. Simulation results show that, compared with well-known deep learning methods, our method improves classification accuracy on a synthetic radio frequency dataset by 0.1% to 12%. In addition, we verify the advantages of our method in a semi-supervised scenario and obtain a significant increase in accuracy compared with traditional semi-supervised learning methods.
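
A minimal sketch of a semi-supervised discriminator objective in PyTorch (an assumption; it shows the generic K + 1 class formulation, not the paper's full network with the encoder and signal spatial-transform module): labeled signals contribute a supervised cross-entropy term, while unlabeled and generated signals contribute a real-vs-fake term, which is how unlabeled samples help the classifier.

```python
import torch
import torch.nn.functional as F

def discriminator_losses(D, labeled_x, labels, unlabeled_x, fake_x, num_classes):
    """D returns num_classes + 1 logits per sample; the last index is the 'fake' class."""
    K = num_classes
    logits_l, logits_u, logits_f = D(labeled_x), D(unlabeled_x), D(fake_x)

    sup = F.cross_entropy(logits_l[:, :K], labels)           # supervised term on labeled signals

    p_fake_u = F.softmax(logits_u, dim=1)[:, K]              # unlabeled real signals
    p_fake_f = F.softmax(logits_f, dim=1)[:, K]              # generator samples
    unsup = -torch.log(1.0 - p_fake_u + 1e-8).mean() \
            - torch.log(p_fake_f + 1e-8).mean()
    return sup + unsup
```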


2020 ◽  
Vol 10 (12) ◽  
pp. 4282
Author(s):  
Ghada Zamzmi ◽  
Sivaramakrishnan Rajaraman ◽  
Sameer Antani

Medical images are acquired at different resolutions based on clinical goals or available technology. In general, however, high-resolution images with fine structural details are preferred for visual task analysis. Recognizing this significance, several deep learning networks have been proposed to enhance medical images for reliable automated interpretation. These deep networks are often computationally complex and require a massive number of parameters, which restrict them to highly capable computing platforms with large memory banks. In this paper, we propose an efficient deep learning approach, called Hydra, which simultaneously reduces computational complexity and improves performance. The Hydra consists of a trunk and several computing heads. The trunk is a super-resolution model that learns the mapping from low-resolution to high-resolution images. It has a simple architecture that is trained using multiple scales at once to minimize a proposed learning-loss function. We also propose to append multiple task-specific heads to the trained Hydra trunk for simultaneous learning of multiple visual tasks in medical images. The Hydra is evaluated on publicly available chest X-ray image collections to perform image enhancement, lung segmentation, and abnormality classification. Our experimental results support our claims and demonstrate that the proposed approach can improve the performance of super-resolution and visual task analysis in medical images at a remarkably reduced computational cost.
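
A minimal sketch of the trunk-plus-heads layout in PyTorch (an assumption; the layer choices are illustrative, not the paper's architecture): a shared super-resolution trunk produces features that lightweight task-specific heads reuse for enhancement, segmentation, and classification.

```python
import torch
import torch.nn as nn

class Hydra(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(                       # shared LR -> HR feature mapping
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.sr_head = nn.Conv2d(64, 1, 3, padding=1)     # image enhancement head
        self.seg_head = nn.Conv2d(64, 1, 1)               # lung segmentation head (logits)
        self.cls_head = nn.Sequential(                    # abnormality classification head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        feats = self.trunk(x)
        return self.sr_head(feats), self.seg_head(feats), self.cls_head(feats)
```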


Author(s):  
F. Pineda ◽  
V. Ayma ◽  
C. Beltran

Abstract. High-resolution satellite images have always been in high demand due to the greater detail and precision they offer and the wide range of fields in which they can be applied; the number of operational satellites offering very high-resolution (VHR) images has increased considerably, but they remain a small proportion compared with existing lower-resolution (HR) satellites. Recent convolutional neural network (CNN) models are well suited to image processing applications such as resolution enhancement; however, to obtain an acceptable result it is important to define not only the CNN architecture but also the reference set of images used to train the model. Our work proposes an alternative for improving the spatial resolution of HR images obtained by the Sentinel-2 satellite, using VHR images from PeruSat-1, a Peruvian satellite, as the reference for a super-resolution approach based on a Generative Adversarial Network (GAN) model, as an alternative for obtaining VHR images. The VHR PeruSat-1 image dataset is used for the training process of the network. The results are analyzed in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). Finally, some visual outcomes over a given testing dataset are presented so the performance of the model can be assessed qualitatively as well.
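
A minimal sketch of the reported evaluation metrics in Python (an assumption; scikit-image is used here for illustration, and the arrays are expected to be co-registered image tiles): PSNR and SSIM compare each super-resolved output against its VHR reference.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr_img: np.ndarray, ref_img: np.ndarray) -> dict:
    """sr_img, ref_img: co-registered (H, W, C) arrays with the same dtype and range."""
    data_range = float(ref_img.max() - ref_img.min())
    return {
        "PSNR": peak_signal_noise_ratio(ref_img, sr_img, data_range=data_range),
        "SSIM": structural_similarity(ref_img, sr_img, data_range=data_range,
                                      channel_axis=-1),
    }
```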


Author(s):  
S. M. Tilon ◽  
F. Nex ◽  
D. Duarte ◽  
N. Kerle ◽  
G. Vosselman

Abstract. Degradation and damage detection provides essential information to maintenance workers in routine monitoring and to first responders in post-disaster scenarios. Despite advances in Earth Observation (EO), image analysis, and deep learning techniques, the quality and quantity of training data for deep learning are still limited. As a result, no robust method has yet been found that transfers and generalizes well over a variety of geographic locations and typologies of damage. Since damages can be seen as anomalies, occurring sparsely over time and space, we propose to use an anomaly-detecting Generative Adversarial Network (GAN) to detect damages. The main advantages of using GANs are that only healthy, unannotated images are needed and that a variety of damages, including never-before-seen damage, can be detected. In this study we aimed to investigate 1) the ability of anomaly-detecting GANs to detect degradation (potholes and cracks) in asphalt road infrastructure using Mobile Mapper imagery and building damage (collapsed buildings, rubble piles) using post-disaster aerial imagery, and 2) the sensitivity of this method to various types of pre-processing. Our results show that we can detect damage in urban scenes at satisfactory levels but not on asphalt roads. Future work will investigate how to further classify the detected damages and how to improve damage detection for asphalt roads.
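
A minimal sketch of a reconstruction-based anomaly score in PyTorch (an assumption; the encoder, generator, and discriminator-feature functions are placeholders rather than the architecture used in the study): a GAN trained only on healthy imagery reconstructs healthy patches well, so a large pixel- and feature-space residual flags a patch as potentially damaged.

```python
import torch
import torch.nn.functional as F

def anomaly_score(G, E, D_features, patch, lam=0.1):
    """E encodes a patch to latent space, G reconstructs it, and D_features returns an
    intermediate discriminator feature map used for a perceptual-style residual."""
    recon = G(E(patch))
    residual = F.l1_loss(recon, patch)                          # pixel-space error
    feat_err = F.l1_loss(D_features(recon), D_features(patch))  # feature-space error
    return (1 - lam) * residual + lam * feat_err                # higher score = more anomalous
```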


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
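
A minimal sketch of the augmentation step in PyTorch (an assumption; the `latent_dim` attribute, dataset wrappers, and sizes are placeholders): velocity models sampled from the trained GAN are appended to the original analog models before the DL tomography operator is trained.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def build_augmented_loader(real_models, generator, n_synthetic=1000, batch_size=16):
    """real_models: tensor of analog velocity models; generator: a trained GAN generator."""
    with torch.no_grad():
        z = torch.randn(n_synthetic, generator.latent_dim)    # latent_dim: assumed attribute
        synthetic_models = generator(z)                       # GAN-sampled velocity models
    augmented = ConcatDataset([TensorDataset(real_models),
                               TensorDataset(synthetic_models)])
    return DataLoader(augmented, batch_size=batch_size, shuffle=True)
```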


2020 ◽  
Vol 10 (21) ◽  
pp. 7755 ◽  
Author(s):  
Liangliang Chen ◽  
Ning Yan ◽  
Hongmai Yang ◽  
Linlin Zhu ◽  
Zongwei Zheng ◽  
...  

Deep learning technology is outstanding in visual inspection. However, in actual industrial production, using deep learning for visual inspection requires large amounts of training data covering different acquisition scenarios. At present, the acquisition of such datasets is very time-consuming and labor-intensive, which limits the further development of deep learning in industrial production. To address the difficulty of acquiring image data for deep learning in industrial production, this paper proposes a data augmentation method based on multi-degree-of-freedom (DOF) automatic image acquisition and designs a multi-DOF automatic image acquisition system for deep learning. By designing random acquisition angles and random illumination conditions, different acquisition scenes in actual production are simulated, and by optimizing the image acquisition path, a large amount of accurate data can be obtained in a short time. To verify the performance of the datasets collected by the system, fabric was selected as the research object after the system was built, and a dataset comparison experiment was carried out. The comparison confirms that the dataset obtained by the system is rich and close to the real application environment, which alleviates, to a certain extent, the problem of insufficient datasets in the application of deep learning.
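
A minimal sketch of how random acquisition conditions could be sampled in Python (an assumption; the joint limits and illumination ranges are illustrative, not the system's actual specification): each shot draws a random camera pose and random lighting so the collected dataset covers varied production scenes.

```python
import random

def sample_acquisition_setup(n_joints=6):
    """Return a random pose (joint angles in degrees) and random illumination settings."""
    pose = [random.uniform(-30.0, 30.0) for _ in range(n_joints)]
    illumination = {
        "intensity": random.uniform(0.3, 1.0),      # normalised lamp power
        "azimuth_deg": random.uniform(0.0, 360.0),
        "elevation_deg": random.uniform(20.0, 80.0),
    }
    return pose, illumination
```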

