Comparative Analysis of Image Fusion Methods According to Spectral Responses of High-Resolution Optical Sensors

2014, Vol 30 (2), pp. 227-239
Author(s): Ha-Seong Lee, Kwan-Young Oh, Hyung-Sup Jung
2007, Vol 6 (8), pp. 1224-1230
Author(s): M. Sasikala, N. Kumaravel

Author(s): Fahimeh Farahnakian, Parisa Movahedi, Jussi Poikonen, Eero Lehtonen, Dimitrios Makris, ...

Author(s): Irfan Kosesoy, Abdulkadir Tepecik, Mufit Cetin, Altan Mesut

2005, Vol 43 (6), pp. 1391-1402
Author(s): Zhijun Wang, D. Ziou, C. Armenakis, D. Li, Qingquan Li

2021, Vol 13 (16), pp. 3226
Author(s): Jianhao Gao, Jie Li, Menghui Jiang

Compared with multispectral sensors, hyperspectral sensors obtain images with high spectral resolution at the cost of spatial resolution, which limits the precise application of hyperspectral images. An effective way to obtain high-resolution hyperspectral images is hyperspectral and multispectral image fusion. In recent years, many studies have found that deep learning-based fusion methods outperform traditional fusion methods thanks to the strong non-linear fitting ability of convolutional neural networks. However, the performance of deep learning-based methods depends heavily on the size and quality of the training dataset, which restricts their use when a training dataset is unavailable or of low quality. In this paper, we introduce a novel fusion method that operates in a self-supervised manner for hyperspectral and multispectral image fusion without training datasets. Our method enforces two constraints, constructed from the low-resolution hyperspectral image and a pseudo high-resolution hyperspectral image obtained from a simple diffusion method. Simulation and real-data experiments are conducted on several popular remote sensing hyperspectral datasets under the condition that training datasets are unavailable. Quantitative and qualitative results indicate that the proposed method outperforms traditional methods by a large margin.
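As a rough illustration of the idea (not the authors' implementation), the Python sketch below expresses two such constraints as simple loss terms. It assumes a Gaussian-blur-and-subsample model for the spatial degradation and uses bicubic upsampling as a stand-in for the paper's simple diffusion step; the function names, ratio, and weighting are hypothetical.

```python
# Minimal sketch of two self-supervised constraints for HSI/MSI fusion.
# Assumptions (not from the paper): Gaussian blur + subsampling as the spatial
# degradation, bicubic upsampling as a stand-in for the "simple diffusion"
# step, and hypothetical names/weights throughout.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def spatial_degrade(hr_hsi, ratio=4, sigma=2.0):
    """Blur each band spatially and subsample: simulates the LR hyperspectral sensor."""
    blurred = gaussian_filter(hr_hsi, sigma=(sigma, sigma, 0))  # no blur across bands
    return blurred[::ratio, ::ratio, :]

def pseudo_hr(lr_hsi, ratio=4):
    """Naive upsampling of the LR HSI, used here in place of the diffusion-based
    pseudo high-resolution hyperspectral image."""
    return zoom(lr_hsi, (ratio, ratio, 1), order=3)

def self_supervised_loss(estimate, lr_hsi, ratio=4, weight=0.1):
    """Constraint 1: the spatially degraded estimate must reproduce the observed LR HSI.
    Constraint 2: the estimate should stay close to the pseudo HR HSI."""
    c1 = np.mean((spatial_degrade(estimate, ratio) - lr_hsi) ** 2)
    c2 = np.mean((estimate - pseudo_hr(lr_hsi, ratio)) ** 2)
    return c1 + weight * c2

# Toy usage: a 32x32 LR cube with 31 bands, fused estimate at 128x128.
lr = np.random.rand(32, 32, 31)
estimate = pseudo_hr(lr)              # initial guess for the fused HR cube
print(self_supervised_loss(estimate, lr))
```

In such a setup, the fused estimate would be optimized (for example, by a network) to minimize this loss, which is what makes the procedure self-supervised.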


2021, Vol 11 (1)
Author(s): Lei Yan, Qun Hao, Jie Cao, Rizvi Saad, Kun Li, ...

Abstract: Image fusion integrates information from multiple images (of the same scene) to generate a (more informative) composite image suitable for human and computer vision perception. Methods based on multiscale decomposition are among the most commonly used fusion methods. In this study, a new fusion framework based on the octave Gaussian pyramid principle is proposed. In comparison with conventional multiscale decomposition, the proposed octave Gaussian pyramid framework retrieves more information by decomposing an image into two scale spaces (octave and interval spaces). Unlike traditional multiscale decomposition, which yields a single set of detail and base layers, the proposed method decomposes an image into multiple sets of detail and base layers and efficiently retains both high- and low-frequency information from the original image. Qualitative and quantitative comparisons with five existing methods (on publicly available image databases) demonstrate that the proposed method produces better visual effects and scores the highest in objective evaluation.
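For concreteness, the following sketch (an assumption-laden illustration, not the paper's code) builds an octave/interval Gaussian scale space with SciPy: each octave halves the resolution, successive intervals apply increasing blur, detail layers are differences of successive intervals, and the most blurred interval serves as the base layer. The parameter values are illustrative only.

```python
# Octave/interval Gaussian scale-space decomposition, as a rough illustration of
# the multiple detail/base layers discussed above (assumed parameters; not the
# authors' implementation).
import numpy as np
from scipy.ndimage import gaussian_filter

def octave_decompose(image, n_octaves=3, n_intervals=3, sigma0=1.6):
    """Return, per octave, a list of detail layers and one base layer."""
    octaves = []
    current = image.astype(np.float64)
    for _ in range(n_octaves):
        # Interval (scale) space for this octave: progressively stronger blur.
        intervals = [gaussian_filter(current, sigma0 * 2 ** (i / n_intervals))
                     for i in range(n_intervals + 1)]
        # Detail layers are differences of successive intervals (high frequencies);
        # the most blurred interval acts as the base layer (low frequencies).
        details = [intervals[i] - intervals[i + 1] for i in range(n_intervals)]
        octaves.append({"details": details, "base": intervals[-1]})
        current = current[::2, ::2]   # halve the resolution for the next octave
    return octaves

# Toy usage on a random grayscale image.
img = np.random.rand(256, 256)
layers = octave_decompose(img)
print(len(layers), len(layers[0]["details"]), layers[0]["base"].shape)
```

In a multiscale-decomposition fusion pipeline, the detail and base layers of the source images would typically be combined octave by octave with a fusion rule before reconstructing the composite image.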


Author(s): Dioline Sara, Ajay Kumar Mandava, Arun Kumar, Shiny Duela, Anitha Jude
