Image Demosaicing Based on Generative Adversarial Network

2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Jingrui Luo ◽  
Jie Wang

Digital cameras with a single sensor use a color filter array (CFA) that captures only one color component at each pixel. Noise and artifacts are therefore generated when the color image is reconstructed, which reduces the effective resolution of the image. In this paper, we propose an image demosaicing method based on a generative adversarial network (GAN) to obtain high-quality color images. The proposed network needs no initial interpolation in the data preparation phase, which greatly reduces the computational complexity. The generator of the GAN is designed as a U-net that directly generates the demosaiced images, and a dense residual network is used for the discriminator to improve its discriminative ability. We compared the proposed method with several interpolation-based algorithms and with DnCNN. The comparative experiments show that the proposed method eliminates image artifacts more effectively and recovers the color image better.
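The "no initial interpolation" idea above can be illustrated with a minimal sketch: simulate an RGGB Bayer mosaic from an RGB image, then pack the four CFA phases into a half-resolution 4-channel tensor, which is a common way (assumed here, not stated in the abstract) to feed raw CFA data to a network directly.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer CFA: keep only one color component per pixel."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

def pack_cfa(mosaic):
    """Pack the mosaic into a half-resolution 4-channel array (R, G1, G2, B),
    so a network can consume CFA data without any pre-interpolation."""
    return np.stack([mosaic[0::2, 0::2], mosaic[0::2, 1::2],
                     mosaic[1::2, 0::2], mosaic[1::2, 1::2]], axis=-1)

rgb = np.random.rand(8, 8, 3)
packed = pack_cfa(bayer_mosaic(rgb))
print(packed.shape)  # (4, 4, 4)
```

The packing step is lossless: each of the four channels carries exactly one CFA phase, which is why no interpolation is needed before the network.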

2013 ◽  
Vol 718-720 ◽  
pp. 2050-2054 ◽  
Author(s):  
Gwang Gil Jeon

Almost all digital cameras adopt a color filter array (CFA) to acquire images, which requires a demosaicking process on the sub-sampled color components to obtain the full-color image. It is therefore necessary to reconstruct the CFA image correctly; otherwise, perceptible color errors appear. This paper proposes a filter-based color interpolation algorithm. The CFA we use is a modified Bayer CFA. Simulation results show that the proposed method is effective and yields high performance in CPSNR and S-CIELAB.
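The CPSNR metric mentioned above is a composite PSNR computed over all three color channels jointly rather than per channel. A minimal sketch (assuming 8-bit peak value 255):

```python
import numpy as np

def cpsnr(ref, test, peak=255.0):
    """Composite PSNR: one PSNR over the MSE of all three channels jointly."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Worst case at 8-bit peak: every pixel off by the full range.
worst = cpsnr(np.zeros((2, 2, 3)), np.full((2, 2, 3), 255.0))
print(worst)  # 0.0
```

S-CIELAB additionally applies a spatial filtering model of human vision before the color-difference computation, which this sketch does not attempt.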


2013 ◽  
Vol 705 ◽  
pp. 319-322
Author(s):  
Gwang Gil Jeon

Images are acquired by digital cameras using a single sensor covered with a color filter array (CFA). The most commonly employed CFA pattern is the Bayer CFA, so in the acquired CFA image each pixel contains only one of the three colors: red, green, or blue. CFA color interpolation methods reconstruct the missing color information of the other two primary colors for every pixel. A single 2×2 Bayer CFA block contains two green pixels, one red pixel, and one blue pixel. In this paper, we interchange the green pixels with the other colors. The performance comparison is shown in the Experimental Results section.
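The 2×2 block structure described above can be made concrete with a small sketch that builds a per-pixel channel-index mask for an RGGB Bayer pattern and counts the samples per channel, showing the 2:1:1 green-to-red-to-blue ratio.

```python
import numpy as np

def bayer_mask(h, w):
    """Per-pixel channel index for an RGGB Bayer CFA: 0=R, 1=G, 2=B."""
    mask = np.ones((h, w), dtype=int)  # green occupies the quincunx positions
    mask[0::2, 0::2] = 0               # red at even rows, even cols
    mask[1::2, 1::2] = 2               # blue at odd rows, odd cols
    return mask

m = bayer_mask(4, 4)
counts = [int(np.sum(m == c)) for c in range(3)]
print(counts)  # [4, 8, 4] -- twice as many green samples as red or blue
```

Interchanging green with another color, as the paper does, amounts to permuting the channel indices in such a mask before sampling.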


2013 ◽  
Vol 717 ◽  
pp. 493-496
Author(s):  
Gwang Gil Jeon

This paper addresses interpolation of the quincunx-patterned green channel obtained by single-sensor cameras. Our goal is to reconstruct the green channel from Bayer color filter array (CFA) data. We present a new filter-based method for reducing image artifacts in the green channel. To reconstruct the green channel, we train a filter using the least-squares method. Experimental results confirm the effectiveness of the proposed method: compared to bilinear and bicubic filters, it achieves improved quality.
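The least-squares filter training step can be sketched as an ordinary linear regression: each training row holds the known neighbors of a missing-green site, and the target is the true green value. The 4-neighbor setup and synthetic data below are assumptions for illustration, not the paper's actual training configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ls_filter(patches, targets):
    """Fit filter coefficients w minimizing ||A w - b||^2 (least squares)."""
    w, *_ = np.linalg.lstsq(patches, targets, rcond=None)
    return w

# Hypothetical training set: each row is the 4 known green neighbors of a
# missing-green site; targets come from a near-bilinear ground truth.
A = rng.random((500, 4))
b = A @ np.array([0.25, 0.25, 0.25, 0.25]) + 0.01 * rng.standard_normal(500)

w = train_ls_filter(A, b)
print(np.round(w, 2))  # coefficients close to the bilinear weights 0.25
```

On real image patches the learned weights deviate from bilinear, which is exactly where the quality improvement over fixed bilinear/bicubic filters comes from.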


2021 ◽  
Author(s):  
Kazutake Uehira ◽  
Hiroshi Unno

A technique for removing unnecessary patterns from captured images by using a generative network is studied. The patterns, composed of lines and spaces, are superimposed onto the blue component of an RGB color image when the image is captured for the purpose of acquiring a depth map. The superimposed patterns become unnecessary after the depth map is acquired. We attempt to remove these unnecessary patterns by using a generative adversarial network (GAN) and an autoencoder (AE). The experimental results show that both the GAN and the AE can remove the patterns to the point of being invisible. They also show that the performance of the GAN is much higher than that of the AE, with a PSNR over 45 dB and an SSIM of about 0.99. From these results, we demonstrate the effectiveness of the GAN-based technique.
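The blue-channel line-and-space superimposition can be sketched as below; the stripe period and strength are hypothetical parameters chosen for illustration, not values from the paper.

```python
import numpy as np

def superimpose_stripes(rgb, period=4, strength=0.2):
    """Add a vertical line-and-space pattern to the blue channel only."""
    out = rgb.copy()
    w = rgb.shape[1]
    stripes = (np.arange(w) % period < period // 2).astype(float)
    out[..., 2] = np.clip(out[..., 2] + strength * stripes[None, :], 0.0, 1.0)
    return out

img = np.full((4, 8, 3), 0.5)
patterned = superimpose_stripes(img)
# Red and green are untouched; only blue carries the pattern.
print(np.allclose(patterned[..., :2], img[..., :2]))  # True
```

The generative network's job is the inverse mapping: given `patterned`, recover `img` once the depth map has been extracted from the pattern.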


2020 ◽  
Vol 11 (5) ◽  
pp. 37-60
Author(s):  
Chiman Kwan ◽  
Jude Larkin

In modern digital cameras, the Bayer color filter array (CFA), also known as CFA 1.0, has been widely used. However, the Bayer pattern is inferior to the red-green-blue-white (RGBW) pattern, also known as CFA 2.0, in low-lighting conditions where Poisson noise is present. It is well known that demosaicing algorithms cannot effectively deal with Poisson noise, so additional denoising is needed to improve image quality. In this paper, we evaluate various conventional and deep-learning-based denoising algorithms for CFA 2.0 in low-lighting conditions. We also investigate the impact of the location of denoising, i.e., whether denoising is performed before or after the demosaicing step. Extensive experiments show that some denoising algorithms can indeed improve image quality in low-lighting conditions, and that the location of denoising plays an important role in overall demosaicing performance.
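The signal-dependent Poisson noise that dominates in low light can be simulated with a minimal sketch: scale the image to an expected photon count, draw Poisson counts, and scale back. The photon counts used here are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_poisson_noise(img, photons=20):
    """Simulate photon shot noise: fewer expected photons -> noisier image."""
    return rng.poisson(img * photons) / photons

clean = np.full((64, 64), 0.5)
dark = add_poisson_noise(clean, photons=10)      # low light: heavy noise
bright = add_poisson_noise(clean, photons=1000)  # more light: mild noise
print(dark.std() > bright.std())  # True
```

Because the noise variance scales with the signal, applying a denoiser before demosaicing sees raw Poisson statistics, while applying it after sees noise already correlated across channels by interpolation; that difference is what the "location of denoising" experiments probe.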


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4818 ◽  
Author(s):  
Hyun-Koo Kim ◽  
Kook-Yeol Yoo ◽  
Ju H. Park ◽  
Ho-Youl Jung

In this paper, we propose a method of generating a color image from light detection and ranging (LiDAR) 3D reflection intensity. The proposed method is composed of two steps: projection of the LiDAR 3D reflection intensity into a 2D intensity image, and color image generation from the projected intensity by using a fully convolutional network (FCN). Because the color image must be generated from a very sparse projected intensity image, the FCN is designed to have an asymmetric network structure, i.e., the layer depth of the decoder is greater than that of the encoder. The well-known KITTI dataset, covering various scenarios, is used for FCN training and performance evaluation. The performance of the asymmetric network structure is empirically analyzed for various encoder and decoder depth combinations. Simulations show that the proposed method generates images of fairly good visual quality while maintaining almost the same color as the ground-truth image. Moreover, the proposed FCN performs much better than conventional interpolation methods and the generative adversarial network-based Pix2Pix. One interesting result is that the proposed FCN produces shadow-free, daylight color images: LiDAR sensor data are produced by active light reflection and are therefore not affected by sunlight or shadow.
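The first step above, projecting sparse 3D LiDAR returns into a 2D intensity image, can be sketched with a pinhole projection. The intrinsic matrix `K` below is a made-up toy camera, not KITTI's calibration; pixels receiving no point stay zero, which is what makes the projected image very sparse.

```python
import numpy as np

def project_to_image(points, intensity, K, h, w):
    """Project 3D points (camera frame) through intrinsics K into a sparse
    2D intensity image; pixels with no projected point remain 0."""
    img = np.zeros((h, w))
    valid = points[:, 2] > 0                 # keep points in front of camera
    uvw = (K @ points[valid].T).T            # homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img[v[inside], u[inside]] = intensity[valid][inside]
    return img

# Toy intrinsics and two LiDAR returns with reflection intensities.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0], [0.5, 0.2, 10.0]])
refl = np.array([0.8, 0.3])
sparse = project_to_image(pts, refl, K, 48, 64)
print(np.count_nonzero(sparse))  # 2
```

The FCN's asymmetric decoder then has to densify and colorize exactly this kind of mostly-zero input.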

