True Orthoimage Generation Using Airborne LiDAR Data with Generative Adversarial Network-Based Deep Learning Model

2021 ◽  
Vol 2021 ◽  
pp. 1-25
Author(s):  
Young Ha Shin ◽  
Dong-Cheon Lee

An orthoimage, which is geometrically equivalent to a map, is one of the most important geospatial products. Displacement and occlusion in optical images are caused by perspective projection, camera tilt, and object relief. A digital surface model (DSM) is essential for generating true orthoimages, both to correct displacement and to recover occluded areas. Light detection and ranging (LiDAR) data collected by an airborne laser scanner (ALS) system is a major source of DSMs. Traditional methods require sophisticated procedures to produce a true orthoimage: most use the 3D coordinates of the DSM together with overlapping multiview images to orthorectify displacement and to detect and recover occluded areas. LiDAR point clouds provide not only 3D coordinates but also intensity information reflected from object surfaces in a georeferenced, orthoprojected space. This paper proposes true orthoimage generation based on a generative adversarial network (GAN) deep learning (DL) approach, using the Pix2Pix model with the intensity and DSM derived from the LiDAR data. The major advantage of LiDAR data is that, in terms of projection geometry, it is already occlusion-free like a true orthoimage, except where image quality is low. Intensive experiments were performed on the benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The results demonstrate that the proposed approach can efficiently generate true orthoimages directly from LiDAR data. However, appropriate preprocessing to improve the quality of the LiDAR intensity is crucial for producing higher-quality true orthoimages.
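Below is a minimal, illustrative sketch of the Pix2Pix-style training step this abstract refers to, assuming PyTorch: a generator maps a two-channel LiDAR raster (intensity + DSM) to an RGB orthoimage and a conditional PatchGAN discriminator judges the result. The layer sizes, tile shapes, and loss weights are assumptions, not the authors' reported configuration.

```python
# Minimal Pix2Pix-style sketch (PyTorch): translate a 2-channel LiDAR raster
# (intensity + DSM) into a 3-channel true orthoimage. Shapes and loss weights
# are illustrative placeholders, not the configuration reported in the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder standing in for the Pix2Pix U-Net generator."""
    def __init__(self, in_ch=2, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN discriminator conditioned on the LiDAR input."""
    def __init__(self, in_ch=2 + 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),   # per-patch real/fake scores
        )
    def forward(self, lidar, image):
        return self.net(torch.cat([lidar, image], dim=1))

gen, disc = Generator(), PatchDiscriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

lidar = torch.randn(4, 2, 256, 256)   # intensity + DSM tiles (dummy batch)
ortho = torch.randn(4, 3, 256, 256)   # reference orthoimage tiles (dummy batch)

# --- discriminator step ---
fake = gen(lidar).detach()
d_real, d_fake = disc(lidar, ortho), disc(lidar, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- generator step: adversarial term + L1 reconstruction term ---
fake = gen(lidar)
d_fake = disc(lidar, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ortho)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```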

2020 ◽  
Author(s):  
Yiu-ming Cheung ◽  
Mengke Li

Complete face recovering (CFR) aims to recover the complete face image from a given partial face image of a target person whose photo may not be included in the gallery set. CFR has several attractive potential applications but is challenging, and as far as we know, the problem has yet to be explored in the literature. This paper therefore proposes an identity-preserved CFR approach (IP-CFR) to address it. First, a denoising autoencoder-based network is applied to acquire discriminative features. Then, we propose an identity-preserved loss function to retain personal identity information. Furthermore, the acquired features are fed into a new variant of the generative adversarial network (GAN) to restore the complete face image. In addition, a two-pathway discriminator is leveraged to enhance the quality of the recovered image. Experimental results on benchmark datasets show the promising performance of the proposed approach.
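As a rough illustration of the identity-preserved loss idea described above, the following PyTorch sketch penalizes the cosine distance between identity embeddings of the recovered face and the ground-truth face. The embedding network and the exact loss form are assumptions for illustration, not the paper's actual components.

```python
# Hedged sketch (PyTorch) of an identity-preserving term: the recovered face
# and the ground-truth face are pushed toward the same identity embedding.
# The embedding network below is a toy stand-in, not the paper's extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityEmbedder(nn.Module):
    """Toy face-identity encoder producing a unit-norm embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.features(x), dim=1)

def identity_preserved_loss(embedder, recovered, target):
    """Cosine-distance penalty between embeddings of recovered and real faces."""
    e_rec, e_tgt = embedder(recovered), embedder(target)
    return (1.0 - (e_rec * e_tgt).sum(dim=1)).mean()

embedder = IdentityEmbedder()
recovered = torch.rand(8, 3, 112, 112)   # faces produced by the generator (dummy)
target = torch.rand(8, 3, 112, 112)      # complete ground-truth faces (dummy)
print(float(identity_preserved_loss(embedder, recovered, target)))
```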


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1394 ◽  
Author(s):  
Jiaohua Qin ◽  
Jing Wang ◽  
Yun Tan ◽  
Huajun Huang ◽  
Xuyu Xiang ◽  
...  

Traditional image steganography modifies the cover image to embed secret messages for transmission. However, the resulting distortion of the cover image can easily be detected by steganalysis tools, leading to leakage of the secret message. Coverless steganography, which hides secret messages without modification, has therefore become an active research topic in recent years, but current coverless methods still suffer from low capacity and poor image quality. To address these problems, we use a generative adversarial network (GAN), an effective deep learning framework, to encode secret messages into the cover image and optimize the quality of the steganographic image through adversarial training. Experiments show that our model not only achieves a payload of 2.36 bits per pixel but also successfully evades detection by steganalysis tools.
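For orientation only, here is a generic PyTorch sketch of the hide/reveal objective behind GAN-based image steganography: a hiding network embeds a bit tensor into the image and a reveal network recovers it. This is not the paper's architecture; the payload, network sizes, and loss terms (including the omitted adversarial/steganalysis discriminator) are placeholders.

```python
# Generic hide/reveal sketch (PyTorch) for GAN-based steganography; shapes and
# the 2-bits-per-pixel message tensor below are illustrative assumptions.
import torch
import torch.nn as nn

class HideNet(nn.Module):
    """Concatenate the cover image and message planes, emit a stego image."""
    def __init__(self, msg_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + msg_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, cover, msg):
        return self.net(torch.cat([cover, msg], dim=1))

class RevealNet(nn.Module):
    """Recover the hidden message planes (as logits) from the stego image."""
    def __init__(self, msg_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, msg_ch, 3, padding=1),
        )
    def forward(self, stego):
        return self.net(stego)

hide, reveal = HideNet(), RevealNet()
cover = torch.rand(4, 3, 64, 64)                       # cover images (dummy)
secret = torch.randint(0, 2, (4, 2, 64, 64)).float()   # 2 bits per pixel (dummy)

stego = hide(cover, secret)
decoded = reveal(stego)

# Training balances imperceptibility against message recovery; an adversarial
# steganalysis term (omitted here) would further improve undetectability.
loss = (nn.functional.mse_loss(stego, cover)
        + nn.functional.binary_cross_entropy_with_logits(decoded, secret))
print(float(loss))
```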


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yirui Wu ◽  
Dabao Wei ◽  
Jun Feng

With the development of fifth-generation networks and artificial intelligence technologies, new threats and challenges have emerged for wireless communication systems, especially in cybersecurity. In this paper, we review attack detection methods that leverage the strengths of deep learning techniques. Specifically, we first summarize the fundamental problems of network security and attack detection and introduce several successful related applications built on deep learning structures. Based on a categorization of deep learning methods, we pay special attention to attack detection methods built on different kinds of architectures, such as autoencoders, generative adversarial networks, recurrent neural networks, and convolutional neural networks. Afterwards, we present several benchmark datasets with descriptions and compare the performance of representative approaches to show the current state of attack detection with deep learning. Finally, we summarize the paper and discuss ways to improve the performance of attack detection using deep learning structures.
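One of the architecture families the review covers, the autoencoder, can be sketched as follows in PyTorch: the model is fit to benign traffic features, and a high reconstruction error flags a possible attack. The feature dimensionality and the threshold rule are illustrative assumptions rather than settings from any surveyed paper.

```python
# Hedged sketch: autoencoder-based attack detection on tabular traffic features.
import torch
import torch.nn as nn

class TrafficAutoencoder(nn.Module):
    def __init__(self, n_features=41):     # e.g. NSL-KDD-style feature vectors (assumed)
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, 8), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TrafficAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
benign = torch.rand(256, 41)                # normalized benign records (dummy)

for _ in range(200):                        # fit the autoencoder to benign data only
    recon = model(benign)
    loss = nn.functional.mse_loss(recon, benign)
    opt.zero_grad(); loss.backward(); opt.step()

# Score unseen records: large reconstruction error suggests anomalous traffic.
with torch.no_grad():
    test = torch.rand(32, 41)
    errors = ((model(test) - test) ** 2).mean(dim=1)
    threshold = errors.mean() + 2 * errors.std()   # simple heuristic cut-off
    flagged = errors > threshold
print(int(flagged.sum()), "records flagged as potential attacks")
```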


2021 ◽  
Vol 12 ◽  
Author(s):  
Nandhini Abirami R. ◽  
Durai Raj Vincent P. M.

Image enhancement is considered one of the more complex tasks in image processing. When images are captured under dim light, their quality degrades due to low visibility, which in turn degrades the performance of vision-based algorithms built for good-quality, high-visibility images. Since the emergence of deep neural networks, a number of methods have been put forward to improve images captured under low light, but the results of existing low-light enhancement methods are not satisfactory because of the lack of effective network structures. A low-light image enhancement technique (LIMET) with a fine-tuned conditional generative adversarial network is presented in this paper. The proposed approach employs two discriminators to acquire semantic meaning, which forces the results to be realistic and natural. Finally, the proposed approach is evaluated on benchmark datasets. The experimental results highlight that the presented approach attains state-of-the-art performance when compared to existing methods. The models' performance is assessed using Visual Information Fidelity (VIF), which measures the quality of the generated image relative to the degraded input. The VIF values obtained with the proposed approach are 0.709123 for the LIME dataset, 0.849982 for the DICM dataset, and 0.619342 for the MEF dataset.
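A hedged PyTorch sketch of the two-discriminator idea mentioned above follows: a global discriminator judges the whole enhanced image while a patch discriminator judges local crops, and the generator is trained against both. All networks, the crop scheme, and the loss weights are placeholders, not the LIMET architecture.

```python
# Two-discriminator conditional GAN sketch for low-light enhancement (toy scale).
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

def make_disc():
    return nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(32, 1, 4, stride=2, padding=1))

disc_global, disc_patch = make_disc(), make_disc()
bce = nn.BCEWithLogitsLoss()

low = torch.rand(4, 3, 128, 128)        # low-light inputs (dummy)
target = torch.rand(4, 3, 128, 128)     # paired well-exposed references (dummy)
enhanced = gen(low)

# The global pathway sees the full image; the patch pathway sees a crop,
# pushing the generator toward locally natural textures as well.
crop = enhanced[:, :, 32:96, 32:96]
d_g = disc_global(enhanced)
d_p = disc_patch(crop)
g_loss = (bce(d_g, torch.ones_like(d_g))
          + bce(d_p, torch.ones_like(d_p))
          + 10.0 * nn.functional.l1_loss(enhanced, target))
print(float(g_loss))
```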


2021 ◽  
Vol 35 (5) ◽  
pp. 395-401
Author(s):  
Mohan Mahanty ◽  
Debnath Bhattacharyya ◽  
Divya Midhunchakkaravarthy

Colon cancer is considered the third most frequently diagnosed cancer after breast and lung cancer. Most colon cancers are adenocarcinomas that develop from adenomatous polyps growing on the inner lining of the colon. The standard procedure for polyp detection is colonoscopy, whose success depends on the colonoscopist's experience and other environmental factors. Nonetheless, a considerable number of polyps (8-37%) are missed during colonoscopy procedures due to human error, and these missed polyps are a potential cause of colorectal cancer. In the last few years, many research groups have developed deep learning-based computer-aided detection (CAD) systems offering techniques for automated polyp detection, localization, and segmentation. Still, more accurate polyp detection and segmentation are required to minimize polyp miss rates. This paper proposes a Super-Resolution Generative Adversarial Network (SRGAN)-assisted encoder-decoder network for fully automated colon polyp segmentation from colonoscopic images. The proposed deep learning model incorporates the SRGAN in the upsampling process to achieve more accurate polyp segmentation. We evaluated our model on the publicly available benchmark datasets CVC-ColonDB and Warwick-QU. The model achieved a Dice score of 0.948 on the CVC-ColonDB dataset, surpassing recent state-of-the-art (SOTA) techniques. When evaluated on the Warwick-QU dataset, it attains a Dice score of 0.936 on Part A and 0.895 on Part B. Our model showed more accurate results for sessile and smaller-sized polyps.
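For reference, the Dice score reported above can be computed as in the short sketch below; this is the standard definition and does not depend on the SRGAN-assisted model itself. The mask shape used in the example is a placeholder.

```python
# Standard Dice coefficient for binary segmentation masks (PyTorch).
import torch

def dice_score(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binarized prediction P and ground truth G."""
    pred = (pred_mask > 0.5).float()
    true = (true_mask > 0.5).float()
    intersection = (pred * true).sum()
    return float((2 * intersection + eps) / (pred.sum() + true.sum() + eps))

# Example with dummy masks shaped roughly like a colonoscopy frame:
pred = torch.rand(1, 1, 288, 384)                  # model output probabilities (dummy)
true = (torch.rand(1, 1, 288, 384) > 0.7).float()  # ground-truth polyp mask (dummy)
print(dice_score(pred, true))
```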


Author(s):  
M. Cao ◽  
H. Ji ◽  
Z. Gao ◽  
T. Mei

Abstract. Vehicle detection in remote sensing images has been attracting remarkable attention over the past years for its applications in the traffic, security, military, and surveillance fields. Due to the stunning success of deep learning techniques in the object detection community, we utilize CNNs for the vehicle detection task in remote sensing imagery. Specifically, we take advantage of deep residual networks, multi-scale feature fusion, hard example mining, and homography augmentation to realize vehicle detection, integrating most of the advanced techniques in the deep learning community. Furthermore, we simultaneously address the super-resolution (SR) and detection problems of low-resolution (LR) images in an end-to-end manner. In consideration of the absence of paired low-/high-resolution data, which is generally time-consuming and cumbersome to collect, we leverage a generative adversarial network (GAN) for unsupervised SR. The detection loss is back-propagated to the SR generator to boost detection performance. We conduct experiments on representative benchmark datasets and demonstrate that our model yields significant improvements over state-of-the-art methods in the deep learning and remote sensing areas.
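A minimal PyTorch sketch of the end-to-end coupling described above follows: the detection loss computed on the super-resolved image is back-propagated into the SR generator. Both networks here are toy stand-ins under stated assumptions; the paper's actual detector (residual backbone, multi-scale fusion, hard example mining) is far larger.

```python
# Joint SR + detection sketch: the detection loss also updates the SR generator.
import torch
import torch.nn as nn

sr_generator = nn.Sequential(                # LR image -> 2x upsampled image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(32, 3, 3, padding=1),
)
detector = nn.Sequential(                    # toy "vehicle objectness" head
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(list(sr_generator.parameters()) + list(detector.parameters()), lr=1e-4)

lr_image = torch.rand(2, 3, 128, 128)        # low-resolution remote sensing tiles (dummy)
target_map = torch.rand(2, 1, 128, 128)      # dummy vehicle objectness targets

sr_image = sr_generator(lr_image)            # super-resolved image stays in the graph
score_map = detector(sr_image)               # detection head applied to the SR output
det_loss = nn.functional.binary_cross_entropy_with_logits(score_map, target_map)

# Because sr_image is not detached, this step also updates the SR generator,
# i.e. the detection loss is back-propagated to the SR generator.
opt.zero_grad(); det_loss.backward(); opt.step()
print(float(det_loss))
```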


Optik ◽  
2021 ◽  
Vol 227 ◽  
pp. 166060
Author(s):  
Yangdi Hu ◽  
Zhengdong Cheng ◽  
Xiaochun Fan ◽  
Zhenyu Liang ◽  
Xiang Zhai

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, several image processing methods have been developed to determine the cell cycle stage of individual cells. However, in most of these methods, cells have to be segmented and their features extracted, and during feature extraction some important information may be lost, resulting in lower classification accuracy. We therefore used a deep learning method to retain all cell features. To address the insufficient number of original images and their imbalanced distribution, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%. Compared with an accuracy of 79.40% in previous experiments, our accuracy increased by 4.48%. Another dataset was used to verify the effect of our model, and compared with previous results, our accuracy increased by 12.52%. The results show that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could help overcome the low classification accuracy in biomedical imaging caused by insufficient numbers of original images and their imbalanced distribution.
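As an illustration of the WGAN-GP augmentation step mentioned above, the following PyTorch sketch computes the standard gradient penalty on interpolates between real and generated samples. The critic is a toy stand-in and the image size is an assumption; only the penalty term follows the usual WGAN-GP formulation.

```python
# WGAN-GP gradient penalty sketch (PyTorch) with a toy critic.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                       nn.Flatten(), nn.Linear(32 * 32 * 32, 1))

def gradient_penalty(critic, real, fake, lam=10.0):
    """Penalize deviation of the critic's gradient norm from 1 on interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=interp, create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()

real = torch.rand(8, 1, 64, 64)      # real cell-cycle images (dummy grayscale)
fake = torch.rand(8, 1, 64, 64)      # generator samples (dummy)
gp = gradient_penalty(critic, real, fake)
critic_loss = critic(fake).mean() - critic(real).mean() + gp
print(float(critic_loss))
```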

