Light field camera all-in-focus image acquisition based on angular information

2021 ◽  
Vol 51 (2) ◽  
Author(s):  
Yingchun Wu ◽
Xing Cheng ◽  
Jie Liang ◽  
Anhong Wang ◽  
Xianling Zhao

Traditional light field all-in-focus image fusion algorithms are based on the digital refocusing technique: multi-focused images converted from a single light field image are used to calculate the all-in-focus image, and the spatial information of the light field is used for sharpness evaluation. Analyzing the 4D light field from another perspective, this paper presents an all-in-focus image fusion algorithm based on angular information. In the proposed method, the 4D light field data are fused directly, and a macro-pixel energy difference function based on angular information is established for sharpness evaluation. The fused 4D data are then refined under the guidance of the dimension-increased central sub-aperture image. Finally, the all-in-focus image is calculated by integrating the refined 4D light field data. Experimental results show that fused images calculated by the proposed method have higher visual quality, and quantitative evaluations also demonstrate the performance of the algorithm: with the light field angular information, both the image-feature-based index and the human-perception-inspired index of the fused image are improved.
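The macro-pixel energy difference function itself is not given in the abstract, but the underlying idea can be sketched: a scene point that is in focus projects to nearly identical values across the angular samples of its macro-pixel, so low angular variance signals sharpness. A minimal numpy sketch, with angular variance standing in for the paper's energy difference function:

```python
import numpy as np

def angular_sharpness(lf):
    """Per-pixel sharpness from a 4D light field lf[u, v, s, t].

    In-focus points are consistent across the angular (u, v) samples of a
    macro-pixel, so low angular variance indicates sharp focus. Returned
    as negative variance so that larger values mean sharper pixels. This
    is a simplified stand-in for the paper's macro-pixel energy
    difference function, which the abstract does not specify.
    """
    return -lf.var(axis=(0, 1))  # shape (S, T)

# Toy example: a 3x3 angular grid over a 4x4 spatial image.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
lf = np.broadcast_to(base, (3, 3, 4, 4)).copy()
lf[:, :, :, 2:] += rng.normal(0, 0.3, (3, 3, 4, 2))  # right half "defocused"

sharp = angular_sharpness(lf)
# Left half is identical across angular samples, so it scores sharpest.
```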

2013 ◽  
Vol 401-403 ◽  
pp. 1381-1384 ◽  
Author(s):  
Zi Juan Luo ◽  
Shuai Ding

It is often difficult to capture an image with all relevant objects in focus, because of the limited depth of field of optical lenses. Multi-focus image fusion can solve this problem effectively. The nonsubsampled contourlet transform (NSCT) offers multiple scales and flexible directional decomposition; when it is applied to image fusion, the characteristics of the original images are better preserved and more information is available for fusion. This paper proposes a new multi-focus image fusion method based on the NSCT with a region-statistics fusion rule. First, the differently focused images are decomposed by the NSCT. Then the low-frequency sub-bands are fused by weighted averaging, and the high-frequency sub-bands are fused by the region-statistics rule. Next, the fused image is obtained by the inverse NSCT. Finally, the experimental results are presented and compared with those of a contourlet-transform-based method; the experiments show that the proposed approach achieves better results.
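A full NSCT implementation is beyond a short sketch, but the two fusion rules described above can be illustrated on precomputed sub-band coefficients; the `lowA`/`highA` names and the 3x3 region size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fuse_low(lowA, lowB, w=0.5):
    """Low-frequency sub-bands: simple weighted average."""
    return w * lowA + (1 - w) * lowB

def fuse_high(highA, highB, r=1):
    """High-frequency sub-bands: pick the coefficient whose local
    (2r+1)x(2r+1) region has the larger energy (sum of squares)."""
    def region_energy(h):
        e = np.zeros(h.shape, dtype=float)
        padded = np.pad(h.astype(float) ** 2, r)
        for ds in range(2 * r + 1):
            for dt in range(2 * r + 1):
                e += padded[ds:ds + h.shape[0], dt:dt + h.shape[1]]
        return e
    return np.where(region_energy(highA) >= region_energy(highB),
                    highA, highB)

# Toy sub-bands: A carries detail on the left, B on the right.
highA = np.array([[5.0, 5.0, 0.0, 0.0]] * 4)
highB = np.array([[0.0, 0.0, 5.0, 5.0]] * 4)
fused = fuse_high(highA, highB)  # keeps the strong side of each source
```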


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Chao Zhang ◽  
Haojin Hu ◽  
Yonghang Tai ◽  
Lijun Yun ◽  
Jun Zhang

To fuse infrared and visible images in wireless applications, the secure extraction and transmission of feature information is an important task. The quality of the fused image depends on the effectiveness of feature extraction and on the transmission of the image-pair characteristics. However, most deep-learning-based fusion approaches do not make effective use of the extracted features, which results in missing semantic content in the fused image. This paper proposes a novel trustworthy image fusion method to address these issues, applying convolutional neural networks for feature extraction and blockchain technology to protect sensitive information. The method reduces the loss of feature information by feeding the output of each convolutional layer in the feature extraction network to the next layer together with the outputs of the preceding layers; and, to ensure similarity between the fused image and the original images, the feature maps of the original inputs are used as input to the reconstruction network. Experimental results show that, compared to other methods, the proposed method achieves better quality and better satisfies human perception.
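The dense feature-reuse scheme described above, in which each layer receives the outputs of all preceding layers, can be sketched without a deep-learning framework. The 1x1-convolution layers, layer count, and growth rate below are hypothetical stand-ins, and the blockchain component is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(x, out_ch):
    """Hypothetical 1x1 convolution: mixes channels at every pixel,
    followed by a ReLU. Weights are random for illustration only."""
    w = rng.normal(0, 0.1, (out_ch, x.shape[0]))
    return np.maximum(np.tensordot(w, x, axes=([1], [0])), 0.0)

def dense_features(x, n_layers=3, growth=4):
    """Each layer receives the channel-wise concatenation of the input
    and all previous layer outputs, so early features are never lost."""
    feats = [x]
    for _ in range(n_layers):
        inp = np.concatenate(feats, axis=0)   # reuse everything so far
        feats.append(conv1x1(inp, growth))
    return np.concatenate(feats, axis=0)

x = rng.random((2, 8, 8))       # (channels, H, W) input feature map
out = dense_features(x)
# out has 2 + 3*4 = 14 channels: the input plus every layer's output.
```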


In today's research, image fusion is a step-by-step procedure for improving the visualization of an image: it integrates the essential features of two or more images into a single fused image without introducing artifacts. Multi-focus image fusion is a key part of this process; it aims to increase the depth of field by extracting the focused parts from multiple differently focused images. In this paper, a multi-focus image fusion algorithm is proposed in which a non-local means technique is used within the stationary wavelet transform (SWT) to obtain a sharp and smooth image. The non-local means function analyses the pixels belonging to the blurred regions and improves the image quality. The proposed work is compared with several existing methods, and the results are analyzed both visually and with performance metrics.
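As a rough illustration of the non-local means step, the sketch below replaces each pixel with a similarity-weighted average over a small search window; the window sizes and filter strength `h` are illustrative choices, not the paper's settings:

```python
import numpy as np

def nlm_denoise(img, search=2, patch=1, h=0.1):
    """Simplified non-local means: each pixel becomes a weighted average
    of pixels in a search window, weighted by how similar their
    surrounding patches are to the reference patch."""
    H, W = img.shape
    p = patch
    padded = np.pad(img, p + search, mode="reflect")
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + search, j + p + search
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    weights += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / weights
    return out

noisy = np.zeros((8, 8))
noisy[3, 3] = 1.0                 # single-pixel "noise" spike
smoothed = nlm_denoise(noisy)     # the spike is averaged down
```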


2014 ◽  
Vol 530-531 ◽  
pp. 390-393
Author(s):  
Yong Wang

Image processing is the basis of computer vision. To address problems in traditional image fusion algorithms, a novel algorithm based on the shearlet transform and multi-decision rules is proposed. Multi-focus image fusion is first discussed; the shearlet transform is then used for image decomposition, with multi-decision rules applied to the high-frequency coefficients. Finally, the fused image is obtained through the inverse shearlet transform. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach retains more image detail and clarity.


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 247
Author(s):  
Areeba Ilyas ◽  
Muhammad Shahid Farid ◽  
Muhammad Hassan Khan ◽  
Marcin Grzegorzek

Multi-focus image fusion is the process of combining the focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more detail than the source images, which makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that groups locally connected pixels with similar colors and patterns, usually referred to as superpixels, and uses them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels and carry more distinctive statistical properties than their neighbors. The statistical properties of the superpixels are analyzed to categorize pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is enforced on the initial focus map to obtain a refined map, which is then used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations on a benchmark multi-focus image fusion dataset show that our method produces better quality fused images than existing image fusion techniques.
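Superpixel segmentation itself (e.g. SLIC) is not reimplemented here; the sketch below uses regular blocks as stand-in regions and local variance as the statistical property, which is enough to show how a region-level focus map drives the fusion:

```python
import numpy as np

def block_focus_map(imgA, imgB, bs=4):
    """Decide, per bs x bs region (a stand-in for superpixels), which
    source image is in focus, using local variance as the statistic.
    Returns a binary map: 1 -> take imgA, 0 -> take imgB."""
    H, W = imgA.shape
    fmap = np.zeros((H, W))
    for i in range(0, H, bs):
        for j in range(0, W, bs):
            a = imgA[i:i + bs, j:j + bs]
            b = imgB[i:i + bs, j:j + bs]
            fmap[i:i + bs, j:j + bs] = 1.0 if a.var() >= b.var() else 0.0
    return fmap

def fuse(imgA, imgB, fmap):
    """Take each pixel from whichever source the focus map selects."""
    return fmap * imgA + (1 - fmap) * imgB

# Toy pair: A is sharp on the left half, B on the right half.
rng = np.random.default_rng(2)
sharp = rng.random((8, 8))
A = sharp.copy(); A[:, 4:] = sharp[:, 4:].mean()   # right half flattened
B = sharp.copy(); B[:, :4] = sharp[:, :4].mean()   # left half flattened
fused = fuse(A, B, block_focus_map(A, B))          # recovers both halves
```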


2016 ◽  
Vol 15 (4) ◽  
pp. 6698-6701
Author(s):  
Navjot Kaur ◽  
Navneet Kaur

Image fusion combines two or more images so that the fused image is more informative than any of the input images, and the fusion process preserves both the spectral and the spatial information of the images. A major problem, however, is the computational time required when high-resolution images are fused. This paper therefore describes a new wavelet-transform-based algorithm in which the transform is applied after the image has been divided into blocks. The algorithm divides the complete image into blocks and compares corresponding blocks by their mean squared error; using a threshold value, the wavelet transform is applied only to the blocks that require it. The transformed blocks are fused using different fusion rules, such as averaging and maximum- or minimum-pixel replacement, and the fused image is then reconstructed by the inverse wavelet transform; the result is more informative than the input images. The quality of the fused image is assessed by comparing it with the original image using mean squared error and peak signal-to-noise ratio. The whole fusion process is applied both to the complete image and with the blocking method; the timing measurements show that the proposed algorithm reduces the computational time by a factor of about ten compared with the existing method.
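The block-selection idea can be sketched with a single-level Haar transform standing in for the paper's wavelet, a max-absolute-coefficient rule standing in for its several fusion rules, and an illustrative MSE threshold:

```python
import numpy as np

def haar2(b):
    """Single-level 2D Haar transform of an even-sized block."""
    a = (b[0::2] + b[1::2]) / 2.0          # row pairs: averages
    d = (b[0::2] - b[1::2]) / 2.0          # row pairs: differences
    rows = np.vstack([a, d])
    a = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a, d])

def ihaar2(c):
    """Inverse of haar2: undo the column step, then the row step."""
    h = c.shape[1] // 2
    a, d = c[:, :h], c[:, h:]
    rows = np.empty_like(c)
    rows[:, 0::2] = a + d
    rows[:, 1::2] = a - d
    h = c.shape[0] // 2
    a, d = rows[:h], rows[h:]
    out = np.empty_like(c)
    out[0::2] = a + d
    out[1::2] = a - d
    return out

def blockwise_fuse(imgA, imgB, bs=4, thresh=1e-3):
    """Fuse block by block; run the transform (max-abs coefficient rule)
    only where the blocks differ, i.e. the MSE exceeds the threshold.
    Near-identical blocks are copied directly, saving transform time."""
    out = np.empty_like(imgA)
    for i in range(0, imgA.shape[0], bs):
        for j in range(0, imgA.shape[1], bs):
            a, b = imgA[i:i+bs, j:j+bs], imgB[i:i+bs, j:j+bs]
            if np.mean((a - b) ** 2) <= thresh:
                out[i:i+bs, j:j+bs] = a
            else:
                ca, cb = haar2(a), haar2(b)
                out[i:i+bs, j:j+bs] = ihaar2(
                    np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
same = blockwise_fuse(img, img)   # identical inputs skip the transform
```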


2019 ◽  
Vol 28 (4) ◽  
pp. 505-516
Author(s):  
Wei-bin Chen ◽  
Mingxiao Hu ◽  
Lai Zhou ◽  
Hongbin Gu ◽  
Xin Zhang

Multi-focus image fusion merges a set of images of the same scene, taken under the same imaging conditions but with different focus points, into a single completely clear image. In order to obtain a clear image that contains all relevant objects in an area, a multi-focus image fusion algorithm based on the wavelet transform is proposed. First, the multi-focus images are decomposed by the wavelet transform. Second, the wavelet coefficients of the approximation and detail sub-images are fused according to the fusion rule. Finally, the fused image is obtained by the inverse wavelet transform. For the low-frequency and high-frequency coefficients, we present a fusion rule based on weighted ratios and on the weighted gradient with an improved edge detection operator. The experimental results illustrate that the proposed algorithm is effective in retaining image detail.
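The improved edge detection operator is not specified in the abstract; the sketch below uses plain Sobel kernels as a stand-in and fuses coefficients with weights proportional to the local gradient magnitude, so edge regions dominate the result:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    """Tiny same-size 2D correlation with reflect padding."""
    r = k.shape[0] // 2
    p = np.pad(img, r, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for di in range(k.shape[0]):
        for dj in range(k.shape[1]):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def grad_mag(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_X.T)
    return np.hypot(gx, gy)

def weighted_gradient_fuse(cA, cB, eps=1e-12):
    """Fuse coefficients with weights proportional to each source's
    local gradient magnitude; equal weights where both are flat."""
    gA, gB = grad_mag(cA), grad_mag(cB)
    wA = (gA + eps) / (gA + gB + 2 * eps)
    return wA * cA + (1 - wA) * cB

A = np.zeros((6, 6)); A[:, 3:] = 1.0   # sharp step edge in source A
B = np.zeros((6, 6))                   # flat counterpart
fused = weighted_gradient_fuse(A, B)   # edge region comes from A
```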

