A 3D High Resolution Generative Deep-learning Network for Fluorescence Microscopy Image

2019 ◽  
Author(s):  
Zhou Hang ◽  
Li Shiwei ◽  
Huang Qing ◽  
Liu Shijie ◽  
Quan Tingwei ◽  
...  

Abstract: Deep learning technology enables us to acquire high-resolution images from low-resolution images in biological imaging, free from sophisticated optical hardware. However, current methods require a huge number of precisely registered low-resolution (LR) and high-resolution (HR) volume image pairs, a requirement that is challenging to meet in biological volume imaging. Here, we propose a 3D deep learning network based on a dual generative adversarial network (dual-GAN) framework for recovering HR volume images from LR volume images. Our network avoids learning a direct mapping from LR to HR volume image pairs, which would require a precise image registration process, and its cycle-consistent design keeps the predicted HR volume image faithful to its corresponding LR volume image. The proposed method achieves the recovery of 20x/1.0 NA volume images from 5x/0.16 NA volume images collected by light-sheet microscopy. In essence, our method is also suitable for other imaging modalities.
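The key to avoiding registered pairs is the cycle constraint: mapping LR to HR and back must reproduce the input volume. A minimal PyTorch sketch of that constraint follows; the tiny generators and the shared volume size are illustrative assumptions, not the paper's networks (there, the HR output lives on a finer grid).

```python
# Minimal sketch of the cycle constraint behind dual-GAN volume
# super-resolution, assuming PyTorch. The tiny generators and the
# shared volume size are illustrative placeholders, not the paper's
# networks (there, the HR output lives on a finer grid).
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Toy 3D convolutional generator mapping one volume to another."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

g_lr2hr = Generator3D()   # LR volume -> predicted HR volume
g_hr2lr = Generator3D()   # HR volume -> predicted LR volume
l1 = nn.L1Loss()

def cycle_loss(lr_vol, hr_vol):
    # Forward cycle: LR -> HR -> LR must reproduce the observed LR volume,
    # so training needs no registered LR/HR pairs.
    lr_rec = g_hr2lr(g_lr2hr(lr_vol))
    # Backward cycle: HR -> LR -> HR must reproduce the observed HR volume.
    hr_rec = g_lr2hr(g_hr2lr(hr_vol))
    return l1(lr_rec, lr_vol) + l1(hr_rec, hr_vol)

# Unpaired volumes, shape (batch, channel, depth, height, width):
loss = cycle_loss(torch.randn(1, 1, 16, 32, 32), torch.randn(1, 1, 16, 32, 32))
# Adversarial terms from the two discriminators would be added in practice.
```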

2021 ◽  
Author(s):  
Huan Zhang ◽  
Zhao Zhang ◽  
Haijun Zhang ◽  
Yi Yang ◽  
Shuicheng Yan ◽  
...  

Deep learning based image inpainting methods have greatly improved performance thanks to the powerful representation ability of deep networks. However, current deep inpainting methods still tend to produce unreasonable structures and blurry textures, implying that image inpainting remains challenging due to the ill-posed nature of the task. To address these issues, we propose a novel deep multi-resolution learning-based progressive image inpainting method, termed MR-InpaintNet, which takes damaged images of different resolutions as input and then fuses the multi-resolution features to repair the damaged images. The idea is motivated by the fact that images of different resolutions provide different levels of feature information: the low-resolution image provides strong semantic information, the high-resolution image offers detailed texture information, and the middle-resolution image reduces the gap between the two, which further refines the inpainting result. To fuse and improve the multi-resolution features, a novel multi-resolution feature learning (MRFL) process is designed, which consists of a multi-resolution feature fusion (MRFF) module, an adaptive feature enhancement (AFE) module and a memory enhanced mechanism (MEM) module for information preservation. The refined multi-resolution features then contain both rich semantic information and detailed texture information from multiple resolutions, and are passed to the decoder to obtain the recovered image. Extensive experiments on the Paris Street View, Places2 and CelebA-HQ datasets demonstrate that the proposed MR-InpaintNet effectively recovers textures and structures and performs favorably against state-of-the-art methods.
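To make the fusion idea concrete, here is a hedged PyTorch sketch of combining encoder features from three input resolutions on a common grid; the layer sizes and wiring are assumptions for illustration, not the actual MRFF module.

```python
# Hedged PyTorch sketch of fusing encoder features from three input
# resolutions, in the spirit of the MRFF module; the layer sizes and
# wiring are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResFusion(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.ModuleList(nn.Conv2d(3, ch, 3, padding=1) for _ in range(3))
        self.fuse = nn.Conv2d(3 * ch, ch, 1)   # 1x1 conv merges the streams

    def forward(self, img_low, img_mid, img_high):
        feats = []
        for conv, img in zip(self.enc, (img_low, img_mid, img_high)):
            f = torch.relu(conv(img))
            # Bring every stream onto the high-resolution grid before fusion.
            f = F.interpolate(f, size=img_high.shape[-2:], mode='bilinear',
                              align_corners=False)
            feats.append(f)
        return self.fuse(torch.cat(feats, dim=1))

# Low resolution carries semantics, high resolution carries texture detail.
low, mid, high = (torch.randn(1, 3, s, s) for s in (64, 128, 256))
fused = MultiResFusion()(low, mid, high)   # -> shape (1, 32, 256, 256)
```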


Author(s):  
Fuqi Mao ◽  
Xiaohan Guan ◽  
Ruoyu Wang ◽  
Wen Yue

As an important tool for studying the microstructure and properties of materials, High Resolution Transmission Electron Microscope (HRTEM) imaging can produce lattice fringe images (reflecting crystal plane spacing), structure images, and individual atom images (reflecting the configuration of atoms or atomic groups in the crystal structure). Despite the rapid development of HRTEM devices, the resolution achievable in HRTEM images remains limited for the human visual system. With the rapid development of deep learning in recent years, researchers have been actively exploring deep learning-based super-resolution (SR) models, which have reached the current best level on various SR benchmarks. Using SR to reconstruct high-resolution HRTEM images is therefore helpful for materials science research. However, one core issue remains unresolved: most of these super-resolution methods require the training data to exist in pairs, and in actual scenarios, especially for HRTEM images, there are no corresponding HR images. To reconstruct high-quality HRTEM images, a novel super-resolution architecture for HRTEM images is proposed in this paper. Borrowing the idea of Dual Regression Networks (DRN), we introduce an additional dual regression structure into ESRGAN, training the model with unpaired HRTEM images and paired natural images. Results of extensive benchmark experiments demonstrate that the proposed method achieves better performance than the most recent SISR methods in both quantitative and visual terms.
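The dual-regression idea can be summarized in a few lines: a primal network super-resolves the LR input and a dual network maps the result back down, so the LR image itself supervises training even when no HR counterpart exists. The sketch below, assuming PyTorch, uses stand-in networks in place of ESRGAN and the paper's dual branch.

```python
# Hedged sketch of the dual-regression constraint added on top of an SR
# generator, assuming PyTorch; stand-in networks replace ESRGAN and the
# paper's dual branch, and all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrimalSR(nn.Module):
    """Stand-in for the ESRGAN generator: LR -> 2x SR estimate."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, lr):
        up = F.interpolate(lr, scale_factor=2, mode='bicubic', align_corners=False)
        return self.conv(up)

class DualLR(nn.Module):
    """Dual mapping: SR estimate -> back down to LR."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(1, 1, 3, stride=2, padding=1)

    def forward(self, sr):
        return self.down(sr)

primal, dual = PrimalSR(), DualLR()
l1 = nn.L1Loss()

lr = torch.randn(4, 1, 64, 64)   # unpaired HRTEM patches: no HR targets exist
sr = primal(lr)
# The dual loss closes the loop: downscaling the SR output must recover the
# observed LR image, which supervises training without paired HR data.
dual_loss = l1(dual(sr), lr)
```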


2020 ◽  
Vol 45 (7) ◽  
pp. 1695 ◽  
Author(s):  
Hang Zhou ◽  
Ruiyao Cai ◽  
Tingwei Quan ◽  
Shijie Liu ◽  
Shiwei Li ◽  
...  

Author(s):  
Zainab Mushtaq

Abstract: Malware is routinely used for illegal purposes, and new malware variants are discovered every day. Computer vision in computer security is one of the most significant research disciplines today, and it has witnessed tremendous growth in the preceding decade due to its efficacy. We applied machine-learning and deep-learning techniques, including Logistic Regression, ANN, CNN, transfer learning on CNN, and LSTM, to arrive at our conclusions, and we report analysis-based results from a range of classification models. InceptionV3 trained using a transfer learning technique yielded strong results compared with other methods such as LSTM: the transfer learning approach was about 98.76 percent accurate on the test dataset and around 99.6 percent accurate on the training dataset. Keywords: Malware, illegal activity, Deep learning, Network Security
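As an illustration of the transfer-learning setup described above, the following tf.keras sketch freezes an ImageNet-pretrained InceptionV3 backbone and trains a small classification head on malware images; the head size, the class count and the hyperparameters are assumptions, not the paper's exact configuration.

```python
# Transfer-learning sketch matching the setup described above, assuming
# tf.keras: an ImageNet-pretrained InceptionV3 backbone is frozen and a
# small head is trained on malware images. The head size, the 25-class
# output and the hyperparameters are assumptions, not the paper's values.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights='imagenet', include_top=False, input_shape=(299, 299, 3))
base.trainable = False   # keep ImageNet features fixed; train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(25, activation='softmax'),   # e.g. 25 malware families
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# train_ds / val_ds could come from tf.keras.utils.image_dataset_from_directory:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```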


2020 ◽  
Vol 12 (15) ◽  
pp. 2345 ◽  
Author(s):  
Ahram Song ◽  
Yongil Kim ◽  
Youkyung Han

Object-based image analysis (OBIA) outperforms pixel-based analysis for change detection (CD) in very high-resolution (VHR) remote sensing images. Although the effectiveness of deep learning approaches has recently been proved, few studies have investigated OBIA and deep learning together for CD. Previously proposed methods use object information obtained in the preprocessing and postprocessing phases of deep learning; in general, they assign the dominant or most frequent label among all pixels inside an object, without any quantitative criterion for integrating the deep learning network and the object information. In this study, we developed an object-based CD method for VHR satellite images that uses a deep learning network to quantify the uncertainty associated with an object and effectively detect changes in an area without ground truth data. The proposed method defines the uncertainty associated with an object and mainly comprises two phases. Initially, CD objects are generated by unsupervised CD methods and used to train a CD network comprising three-dimensional convolutional layers and convolutional long short-term memory layers. After training, the CD objects are updated according to their uncertainty level, and the updated objects serve as training data for the next round. This process is repeated until the entire area is classified into two classes, change and no-change, at the object level, or until a defined number of epochs is reached. Experiments on two different VHR satellite images confirmed that the proposed method achieved the best performance compared with traditional CD approaches. The method was less affected by salt-and-pepper noise and could effectively extract changed regions in object units without ground truth data. Furthermore, it combines the advantages of unsupervised CD methods and of a CD network with postprocessing by effectively exploiting the deep learning technique and object information.
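A rough tf.keras sketch of a network mixing 3D convolutions and a ConvLSTM layer over a two-date patch stack is given below; the layer counts, patch size and band count are placeholders rather than the authors' exact design, and the closing comment outlines how object uncertainty drives the iterative relabeling.

```python
# Rough tf.keras sketch of a change-detection network mixing 3D convolutions
# and a ConvLSTM layer over a two-date patch stack; layer counts, patch size
# and band count are placeholders, not the authors' exact design.
import tensorflow as tf
from tensorflow.keras import layers

def build_cd_net(bands=4, patch=32):
    # Input: the same patch at two acquisition dates, stacked on a time axis.
    inp = layers.Input(shape=(2, patch, patch, bands))
    x = layers.Conv3D(16, (2, 3, 3), padding='same', activation='relu')(inp)
    x = layers.ConvLSTM2D(32, 3, padding='same', return_sequences=False)(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(2, activation='softmax')(x)   # change / no-change
    return tf.keras.Model(inp, out)

model = build_cd_net()
# Iterative scheme from the paper, in outline: objects whose predicted class
# probability sits near 0.5 are treated as uncertain; only confident objects
# are fed back as pseudo-labels to retrain the network in the next round.
```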



Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 236
Author(s):  
Haoran Xu ◽  
Xinya Li ◽  
Kaiyi Zhang ◽  
Yanbai He ◽  
Haoran Fan ◽  
...  

Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods can handle only low-resolution inputs, typically smaller than 1K. Meanwhile, with the improvement of Internet transmission capacity and mobile device cameras, the resolution of image and video sources available to users via the cloud or locally keeps increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of a shrunken image, yielding a blurry result, so there is an urgent need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. Hence, we propose a general deep learning framework for high-resolution image inpainting, which first hallucinates a semantically continuous, blurred result using low-resolution inpainting, keeping computational overhead low, and then reconstructs the sharp high-frequency details at the original resolution through super-resolution refinement. Experimentally, our method achieves inspiring inpainting quality on 2K and 4K resolution images, ahead of state-of-the-art high-resolution inpainting techniques. We expect this framework to be widely adopted for high-resolution image editing tasks on personal computers and mobile devices.
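The two-stage design reads naturally as a pipeline: shrink, inpaint coarsely, then upscale and refine. A schematic PyTorch sketch follows, with both stage networks left as hypothetical callables since the paper's concrete models are not reproduced here.

```python
# Schematic PyTorch sketch of the two-stage pipeline: inpaint a shrunken
# copy, then upscale and refine. Both stage networks are hypothetical
# callables here; the paper's concrete models are not reproduced.
import torch
import torch.nn.functional as F

def inpaint_highres(image, mask, lr_inpaint_net, sr_refine_net, lr_size=512):
    """image: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W) with 1 = missing."""
    h, w = image.shape[-2:]
    # Stage 1: low-resolution inpainting keeps memory and compute bounded
    # and yields a semantically continuous but blurry completion.
    img_lr = F.interpolate(image, size=(lr_size, lr_size), mode='bilinear',
                           align_corners=False)
    mask_lr = F.interpolate(mask, size=(lr_size, lr_size), mode='nearest')
    coarse = lr_inpaint_net(img_lr, mask_lr)
    # Stage 2: super-resolution refinement restores high-frequency detail
    # at the original resolution.
    coarse_up = F.interpolate(coarse, size=(h, w), mode='bilinear',
                              align_corners=False)
    refined = sr_refine_net(coarse_up)
    # Keep known pixels untouched; only the hole is filled by the network.
    return image * (1 - mask) + refined * mask
```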


2021 ◽  
Vol 261 ◽  
pp. 01021
Author(s):  
Jiwei Li ◽  
Linsheng Li ◽  
Changlu Xu

In the field of defect recognition, deep learning technology offers stronger generalization and higher accuracy than mainstream machine learning techniques. This paper proposes a deep learning network model that first processes a self-made dataset of 3,600 images and then feeds them to a convolutional neural network model for training. The trained model can effectively identify three types of defects on lithium battery pole pieces, with an accuracy of 92%, higher than that of the AlexNet model architecture.
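For reference, a minimal tf.keras sketch of a small CNN classifier for three defect classes is shown below; the paper does not detail its architecture beyond comparing against AlexNet, so the input size and layer configuration here are assumptions.

```python
# Minimal tf.keras sketch of a small CNN for three-class defect recognition;
# the paper does not detail its architecture beyond comparing against
# AlexNet, so the input size and layers here are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),            # grayscale pole-piece images
    layers.Conv2D(32, 3, activation='relu'), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(3, activation='softmax'),        # three defect classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```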

